00:00:00.000 Started by upstream project "autotest-per-patch" build number 132727
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.105 The recommended git tool is: git
00:00:00.106 using credential 00000000-0000-0000-0000-000000000002
00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.179 Fetching changes from the remote Git repository
00:00:00.184 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.248 Using shallow fetch with depth 1
00:00:00.248 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.248 > git --version # timeout=10
00:00:00.301 > git --version # 'git version 2.39.2'
00:00:00.301 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.331 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.331 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.233 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.246 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.259 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.259 > git config core.sparsecheckout # timeout=10
00:00:07.272 > git read-tree -mu HEAD # timeout=10
00:00:07.306 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.328 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.328 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
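[Editor's note] The checkout above can be replayed by hand outside Jenkins; a minimal sketch, with the site-specific GIT_ASKPASS credential helper and proxy-dmz.intel.com proxy wiring omitted:

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # Shallow fetch (depth 1) of the branch tip, as the git plugin does above:
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # Pin the working tree to the revision Jenkins resolved from FETCH_HEAD:
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507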
00:00:07.412 [Pipeline] Start of Pipeline
00:00:07.423 [Pipeline] library
00:00:07.424 Loading library shm_lib@master
00:00:07.424 Library shm_lib@master is cached. Copying from home.
00:00:07.440 [Pipeline] node
00:00:07.446 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.447 [Pipeline] {
00:00:07.456 [Pipeline] catchError
00:00:07.457 [Pipeline] {
00:00:07.465 [Pipeline] wrap
00:00:07.471 [Pipeline] {
00:00:07.479 [Pipeline] stage
00:00:07.481 [Pipeline] { (Prologue)
00:00:07.497 [Pipeline] echo
00:00:07.499 Node: VM-host-SM0
00:00:07.505 [Pipeline] cleanWs
00:00:07.515 [WS-CLEANUP] Deleting project workspace...
00:00:07.515 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.520 [WS-CLEANUP] done
00:00:07.733 [Pipeline] setCustomBuildProperty
00:00:07.792 [Pipeline] httpRequest
00:00:08.212 [Pipeline] echo
00:00:08.215 Sorcerer 10.211.164.101 is alive
00:00:08.226 [Pipeline] retry
00:00:08.228 [Pipeline] {
00:00:08.242 [Pipeline] httpRequest
00:00:08.247 HttpMethod: GET
00:00:08.247 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.247 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.254 Response Code: HTTP/1.1 200 OK
00:00:08.254 Success: Status code 200 is in the accepted range: 200,404
00:00:08.254 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.030 [Pipeline] }
00:00:24.043 [Pipeline] // retry
00:00:24.050 [Pipeline] sh
00:00:24.328 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.403 [Pipeline] httpRequest
00:00:24.758 [Pipeline] echo
00:00:24.760 Sorcerer 10.211.164.101 is alive
00:00:24.770 [Pipeline] retry
00:00:24.772 [Pipeline] {
00:00:24.785 [Pipeline] httpRequest
00:00:24.789 HttpMethod: GET
00:00:24.790 URL: http://10.211.164.101/packages/spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz
00:00:24.790 Sending request to url: http://10.211.164.101/packages/spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz
00:00:24.794 Response Code: HTTP/1.1 200 OK
00:00:24.795 Success: Status code 200 is in the accepted range: 200,404
00:00:24.795 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz
00:05:22.613 [Pipeline] }
00:05:22.633 [Pipeline] // retry
00:05:22.643 [Pipeline] sh
00:05:22.923 + tar --no-same-owner -xf spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz
00:05:26.212 [Pipeline] sh
00:05:26.491 + git -C spdk log --oneline -n5
00:05:26.491 e9db16374 nvme: add spdk_nvme_poll_group_get_fd_group()
00:05:26.491 cf089b398 thread: fd_group-based interrupts
00:05:26.491 8a4656bc1 thread: move interrupt allocation to a function
00:05:26.491 09908f908 util: add method for setting fd_group's wrapper
00:05:26.491 697130caf util: multi-level fd_group nesting
00:05:26.509 [Pipeline] writeFile
00:05:26.523 [Pipeline] sh
00:05:26.803 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:05:26.813 [Pipeline] sh
00:05:27.091 + cat autorun-spdk.conf
00:05:27.091 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:27.091 SPDK_TEST_NVME=1
00:05:27.091 SPDK_TEST_FTL=1
00:05:27.091 SPDK_TEST_ISAL=1
00:05:27.091 SPDK_RUN_ASAN=1
00:05:27.091 SPDK_RUN_UBSAN=1
00:05:27.091 SPDK_TEST_XNVME=1
00:05:27.091 SPDK_TEST_NVME_FDP=1
00:05:27.091 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:27.097 RUN_NIGHTLY=0
00:05:27.098 [Pipeline] }
00:05:27.112 [Pipeline] // stage
00:05:27.129 [Pipeline] stage
00:05:27.132 [Pipeline] { (Run VM)
00:05:27.145 [Pipeline] sh
00:05:27.424 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:05:27.424 + echo 'Start stage prepare_nvme.sh'
00:05:27.424 Start stage prepare_nvme.sh
00:05:27.424 + [[ -n 7 ]]
00:05:27.424 + disk_prefix=ex7
00:05:27.424 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:05:27.424 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:05:27.424 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:05:27.424 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:27.424 ++ SPDK_TEST_NVME=1
00:05:27.424 ++ SPDK_TEST_FTL=1
00:05:27.424 ++ SPDK_TEST_ISAL=1
00:05:27.424 ++ SPDK_RUN_ASAN=1
00:05:27.424 ++ SPDK_RUN_UBSAN=1
00:05:27.424 ++ SPDK_TEST_XNVME=1
00:05:27.424 ++ SPDK_TEST_NVME_FDP=1
00:05:27.424 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:27.424 ++ RUN_NIGHTLY=0
00:05:27.424 + cd /var/jenkins/workspace/nvme-vg-autotest
00:05:27.424 + nvme_files=()
00:05:27.424 + declare -A nvme_files
00:05:27.424 + backend_dir=/var/lib/libvirt/images/backends
00:05:27.424 + nvme_files['nvme.img']=5G
00:05:27.424 + nvme_files['nvme-cmb.img']=5G
00:05:27.424 + nvme_files['nvme-multi0.img']=4G
00:05:27.424 + nvme_files['nvme-multi1.img']=4G
00:05:27.424 + nvme_files['nvme-multi2.img']=4G
00:05:27.424 + nvme_files['nvme-openstack.img']=8G
00:05:27.424 + nvme_files['nvme-zns.img']=5G
00:05:27.424 + (( SPDK_TEST_NVME_PMR == 1 ))
00:05:27.424 + (( SPDK_TEST_FTL == 1 ))
00:05:27.424 + nvme_files["nvme-ftl.img"]=6G
00:05:27.425 + (( SPDK_TEST_NVME_FDP == 1 ))
00:05:27.425 + nvme_files["nvme-fdp.img"]=1G
00:05:27.425 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:05:27.425 + for nvme in "${!nvme_files[@]}"
00:05:27.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:05:27.425 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:05:27.425 + for nvme in "${!nvme_files[@]}"
00:05:27.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:05:27.425 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:05:27.425 + for nvme in "${!nvme_files[@]}"
00:05:27.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:05:27.425 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:05:27.425 + for nvme in "${!nvme_files[@]}"
00:05:27.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:05:27.425 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:05:27.425 + for nvme in "${!nvme_files[@]}"
00:05:27.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:05:27.425 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:05:27.425 + for nvme in "${!nvme_files[@]}"
00:05:27.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:05:27.682 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:05:27.682 + for nvme in "${!nvme_files[@]}"
00:05:27.682 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:05:27.682 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:05:27.683 + for nvme in "${!nvme_files[@]}"
00:05:27.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:05:27.683 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:05:27.683 + for nvme in "${!nvme_files[@]}"
00:05:27.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:05:27.940 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:05:27.940 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:05:27.940 + echo 'End stage prepare_nvme.sh'
00:05:27.940 End stage prepare_nvme.sh
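[Editor's note] The prepare_nvme.sh traces above amount to a simple data-driven loop: an associative array maps image names to sizes, and create_nvme_img.sh (whose "Formatting ..." lines are qemu-img output) preallocates one raw backing file per future NVMe device. A minimal sketch reconstructed from the xtrace, with the real script's PMR/ZNS branches and error handling omitted:

    declare -A nvme_files              # image name -> size
    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex7

    nvme_files['nvme.img']=5G
    nvme_files['nvme-multi0.img']=4G
    nvme_files['nvme-multi1.img']=4G
    nvme_files['nvme-multi2.img']=4G
    (( SPDK_TEST_FTL == 1 ))      && nvme_files['nvme-ftl.img']=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G

    # One raw, falloc-preallocated backing file per entry:
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done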
00:05:27.951 [Pipeline] sh
00:05:28.233 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:05:28.233 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:05:28.520 
00:05:28.520 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:05:28.520 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:05:28.520 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:05:28.520 HELP=0
00:05:28.520 DRY_RUN=0
00:05:28.520 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:05:28.520 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:05:28.520 NVME_AUTO_CREATE=0
00:05:28.520 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:05:28.520 NVME_CMB=,,,,
00:05:28.520 NVME_PMR=,,,,
00:05:28.520 NVME_ZNS=,,,,
00:05:28.520 NVME_MS=true,,,,
00:05:28.520 NVME_FDP=,,,on,
00:05:28.520 SPDK_VAGRANT_DISTRO=fedora39
00:05:28.520 SPDK_VAGRANT_VMCPU=10
00:05:28.520 SPDK_VAGRANT_VMRAM=12288
00:05:28.520 SPDK_VAGRANT_PROVIDER=libvirt
00:05:28.520 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:05:28.520 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:05:28.520 SPDK_OPENSTACK_NETWORK=0
00:05:28.520 VAGRANT_PACKAGE_BOX=0
00:05:28.520 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:05:28.520 FORCE_DISTRO=true
00:05:28.520 VAGRANT_BOX_VERSION=
00:05:28.520 EXTRA_VAGRANTFILES=
00:05:28.520 NIC_MODEL=e1000
00:05:28.520 
00:05:28.520 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:05:28.520 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:05:31.834 Bringing machine 'default' up with 'libvirt' provider...
00:05:32.767 ==> default: Creating image (snapshot of base box volume).
00:05:32.767 ==> default: Creating domain with the following settings...
00:05:32.767 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733490019_75a0d58fe672cbe55fd8
00:05:32.767 ==> default: -- Domain type: kvm
00:05:32.767 ==> default: -- Cpus: 10
00:05:32.767 ==> default: -- Feature: acpi
00:05:32.767 ==> default: -- Feature: apic
00:05:32.767 ==> default: -- Feature: pae
00:05:32.767 ==> default: -- Memory: 12288M
00:05:32.767 ==> default: -- Memory Backing: hugepages:
00:05:32.767 ==> default: -- Management MAC:
00:05:32.767 ==> default: -- Loader:
00:05:32.767 ==> default: -- Nvram:
00:05:32.767 ==> default: -- Base box: spdk/fedora39
00:05:32.767 ==> default: -- Storage pool: default
00:05:32.767 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733490019_75a0d58fe672cbe55fd8.img (20G)
00:05:32.767 ==> default: -- Volume Cache: default
00:05:32.767 ==> default: -- Kernel:
00:05:32.767 ==> default: -- Initrd:
00:05:32.767 ==> default: -- Graphics Type: vnc
00:05:32.767 ==> default: -- Graphics Port: -1
00:05:32.767 ==> default: -- Graphics IP: 127.0.0.1
00:05:32.767 ==> default: -- Graphics Password: Not defined
00:05:32.767 ==> default: -- Video Type: cirrus
00:05:32.767 ==> default: -- Video VRAM: 9216
00:05:32.767 ==> default: -- Sound Type:
00:05:32.767 ==> default: -- Keymap: en-us
00:05:32.767 ==> default: -- TPM Path:
00:05:32.767 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:32.767 ==> default: -- Command line args:
00:05:32.767 ==> default: -> value=-device,
00:05:32.767 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:32.767 ==> default: -> value=-drive,
00:05:32.767 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:05:32.767 ==> default: -> value=-device,
00:05:32.767 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:05:32.767 ==> default: -> value=-device,
00:05:32.767 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:32.767 ==> default: -> value=-drive,
00:05:32.767 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:05:32.767 ==> default: -> value=-device,
00:05:32.767 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:32.767 ==> default: -> value=-device,
00:05:32.768 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:05:32.768 ==> default: -> value=-drive,
00:05:32.768 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:05:32.768 ==> default: -> value=-device,
00:05:32.768 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:32.768 ==> default: -> value=-drive,
00:05:32.768 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:05:32.768 ==> default: -> value=-device,
00:05:32.768 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:32.768 ==> default: -> value=-drive,
00:05:32.768 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:05:32.768 ==> default: -> value=-device,
00:05:32.768 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:32.768 ==> default: -> value=-device,
00:05:32.768 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:05:32.768 ==> default: -> value=-device,
00:05:32.768 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:05:32.768 ==> default: -> value=-drive,
00:05:32.768 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:05:32.768 ==> default: -> value=-device,
00:05:32.768 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:33.024 ==> default: Creating shared folders metadata...
00:05:33.024 ==> default: Starting domain.
00:05:34.916 ==> default: Waiting for domain to get an IP address...
00:05:53.000 ==> default: Waiting for SSH to become available...
00:05:53.000 ==> default: Configuring and enabling network interfaces...
00:05:56.298     default: SSH address: 192.168.121.56:22
00:05:56.298     default: SSH username: vagrant
00:05:56.298     default: SSH auth method: private key
00:05:58.834 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:06.938 ==> default: Mounting SSHFS shared folder...
00:06:07.869 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:07.869 ==> default: Checking Mount..
00:06:09.240 ==> default: Folder Successfully Mounted!
00:06:09.240 ==> default: Running provisioner: file...
00:06:10.173     default: ~/.gitconfig => .gitconfig
00:06:10.431 
00:06:10.431 SUCCESS!
00:06:10.431 
00:06:10.431 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:06:10.431 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:06:10.431 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:06:10.431 
00:06:10.438 [Pipeline] }
00:06:10.452 [Pipeline] // stage
00:06:10.461 [Pipeline] dir
00:06:10.461 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:06:10.463 [Pipeline] {
00:06:10.474 [Pipeline] catchError
00:06:10.475 [Pipeline] {
00:06:10.490 [Pipeline] sh
00:06:10.772 + vagrant ssh-config --host vagrant
00:06:10.772 + sed -ne /^Host/,$p
00:06:10.772 + tee ssh_conf
00:06:14.957 Host vagrant
00:06:14.957   HostName 192.168.121.56
00:06:14.957   User vagrant
00:06:14.957   Port 22
00:06:14.957   UserKnownHostsFile /dev/null
00:06:14.957   StrictHostKeyChecking no
00:06:14.957   PasswordAuthentication no
00:06:14.957   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:06:14.957   IdentitiesOnly yes
00:06:14.957   LogLevel FATAL
00:06:14.957   ForwardAgent yes
00:06:14.957   ForwardX11 yes
00:06:14.957 
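[Editor's note] The "Command line args" dump above is vagrant-libvirt's rendering of extra qemu-system-x86_64 arguments. As a sketch, the FDP-capable controller (nvme-3) alone corresponds roughly to the invocation below; only the -device/-drive arguments are taken from the log, while the machine, memory, and display flags are illustrative placeholders (the job's QEMU is vanilla v8.0.0, where nvme-subsys grew the fdp.* properties):

    # nvme-subsys ... fdp=on enables Flexible Data Placement on the subsystem
    #   (96M reclaim-unit size, 2 reclaim groups, 8 reclaim-unit handles, per the log).
    # nvme ... subsys=fdp-subsys3 joins the controller to that subsystem.
    # nvme-ns ... carves a 4096-byte-block namespace out of the raw backing file.
    qemu-system-x86_64 -machine q35,accel=kvm -m 1024 -nographic \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096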
00:06:14.970 [Pipeline] withEnv
00:06:14.973 [Pipeline] {
00:06:14.984 [Pipeline] sh
00:06:15.263 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:06:15.263 source /etc/os-release
00:06:15.263 [[ -e /image.version ]] && img=$(< /image.version)
00:06:15.263 # Minimal, systemd-like check.
00:06:15.263 if [[ -e /.dockerenv ]]; then
00:06:15.263   # Clear garbage from the node's name:
00:06:15.263   # agt-er_autotest_547-896 -> autotest_547-896
00:06:15.263   # $HOSTNAME is the actual container id
00:06:15.263   agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:06:15.263   if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:06:15.263     # We can assume this is a mount from a host where container is running,
00:06:15.263     # so fetch its hostname to easily identify the target swarm worker.
00:06:15.263     container="$(< /etc/hostname) ($agent)"
00:06:15.263   else
00:06:15.263     # Fallback
00:06:15.263     container=$agent
00:06:15.263   fi
00:06:15.263 fi
00:06:15.263 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:06:15.263 
00:06:15.532 [Pipeline] }
00:06:15.549 [Pipeline] // withEnv
00:06:15.557 [Pipeline] setCustomBuildProperty
00:06:15.571 [Pipeline] stage
00:06:15.572 [Pipeline] { (Tests)
00:06:15.586 [Pipeline] sh
00:06:15.863 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:06:16.134 [Pipeline] sh
00:06:16.411 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:06:16.775 [Pipeline] timeout
00:06:16.775 Timeout set to expire in 50 min
00:06:16.777 [Pipeline] {
00:06:16.790 [Pipeline] sh
00:06:17.067 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:06:17.632 HEAD is now at e9db16374 nvme: add spdk_nvme_poll_group_get_fd_group()
00:06:17.642 [Pipeline] sh
00:06:17.917 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:06:18.188 [Pipeline] sh
00:06:18.465 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:06:18.740 [Pipeline] sh
00:06:19.022 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:06:19.022 ++ readlink -f spdk_repo
00:06:19.279 + DIR_ROOT=/home/vagrant/spdk_repo
00:06:19.279 + [[ -n /home/vagrant/spdk_repo ]]
00:06:19.279 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:06:19.279 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:06:19.279 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:06:19.279 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:06:19.279 + [[ -d /home/vagrant/spdk_repo/output ]]
00:06:19.279 + [[ nvme-vg-autotest == pkgdep-* ]]
00:06:19.279 + cd /home/vagrant/spdk_repo
00:06:19.279 + source /etc/os-release
00:06:19.279 ++ NAME='Fedora Linux'
00:06:19.279 ++ VERSION='39 (Cloud Edition)'
00:06:19.279 ++ ID=fedora
00:06:19.279 ++ VERSION_ID=39
00:06:19.279 ++ VERSION_CODENAME=
00:06:19.279 ++ PLATFORM_ID=platform:f39
00:06:19.279 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:19.279 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:19.279 ++ LOGO=fedora-logo-icon
00:06:19.279 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:19.279 ++ HOME_URL=https://fedoraproject.org/
00:06:19.279 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:19.279 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:19.279 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:19.279 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:19.279 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:19.279 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:19.279 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:19.279 ++ SUPPORT_END=2024-11-12
00:06:19.279 ++ VARIANT='Cloud Edition'
00:06:19.279 ++ VARIANT_ID=cloud
00:06:19.279 + uname -a
00:06:19.279 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:19.279 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:19.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:19.808 Hugepages
00:06:19.808 node     hugesize     free /  total
00:06:19.808 node0   1048576kB        0 /      0
00:06:19.808 node0      2048kB        0 /      0
00:06:19.808 
00:06:19.808 Type     BDF             Vendor  Device  NUMA     Driver      Device  Block devices
00:06:19.808 virtio   0000:00:03.0    1af4    1001    unknown  virtio-pci  -       vda
00:06:19.808 NVMe     0000:00:10.0    1b36    0010    unknown  nvme        nvme0   nvme0n1
00:06:20.067 NVMe     0000:00:11.0    1b36    0010    unknown  nvme        nvme1   nvme1n1
00:06:20.067 NVMe     0000:00:12.0    1b36    0010    unknown  nvme        nvme2   nvme2n1 nvme2n2 nvme2n3
00:06:20.067 NVMe     0000:00:13.0    1b36    0010    unknown  nvme        nvme3   nvme3n1
00:06:20.067 + rm -f /tmp/spdk-ld-path
00:06:20.067 + source autorun-spdk.conf
00:06:20.068 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:20.068 ++ SPDK_TEST_NVME=1
00:06:20.068 ++ SPDK_TEST_FTL=1
00:06:20.068 ++ SPDK_TEST_ISAL=1
00:06:20.068 ++ SPDK_RUN_ASAN=1
00:06:20.068 ++ SPDK_RUN_UBSAN=1
00:06:20.068 ++ SPDK_TEST_XNVME=1
00:06:20.068 ++ SPDK_TEST_NVME_FDP=1
00:06:20.068 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:20.068 ++ RUN_NIGHTLY=0
00:06:20.068 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:20.068 + [[ -n '' ]]
00:06:20.068 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:06:20.068 + for M in /var/spdk/build-*-manifest.txt
00:06:20.068 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:20.068 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:06:20.068 + for M in /var/spdk/build-*-manifest.txt
00:06:20.068 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:20.068 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:06:20.068 + for M in /var/spdk/build-*-manifest.txt
00:06:20.068 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:20.068 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:06:20.068 ++ uname
00:06:20.068 + [[ Linux == \L\i\n\u\x ]]
00:06:20.068 + sudo dmesg -T
00:06:20.068 + sudo dmesg --clear
00:06:20.068 + dmesg_pid=5293
00:06:20.068 + [[ Fedora Linux == FreeBSD ]]
00:06:20.068 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:20.068 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:20.068 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:20.068 + [[ -x /usr/src/fio-static/fio ]]
00:06:20.068 + sudo dmesg -Tw
00:06:20.068 + export FIO_BIN=/usr/src/fio-static/fio
00:06:20.068 + FIO_BIN=/usr/src/fio-static/fio
00:06:20.068 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:06:20.068 + [[ ! -v VFIO_QEMU_BIN ]]
00:06:20.068 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:06:20.068 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:20.068 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:20.068 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:06:20.068 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:20.068 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:20.068 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:20.068 13:01:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:06:20.068 13:01:07 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:20.068 13:01:07 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:06:20.068 13:01:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:06:20.068 13:01:07 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:20.326 13:01:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:06:20.326 13:01:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:20.326 13:01:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:06:20.326 13:01:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:06:20.326 13:01:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:20.326 13:01:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:20.326 13:01:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:20.326 13:01:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:20.326 13:01:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:20.327 13:01:07 -- paths/export.sh@5 -- $ export PATH
00:06:20.327 13:01:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:20.327 13:01:07 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:06:20.327 13:01:07 -- common/autobuild_common.sh@493 -- $ date +%s
00:06:20.327 13:01:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733490067.XXXXXX
00:06:20.327 13:01:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733490067.4TZs2m
00:06:20.327 13:01:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:06:20.327 13:01:07 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:06:20.327 13:01:07 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:06:20.327 13:01:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:06:20.327 13:01:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:06:20.327 13:01:07 -- common/autobuild_common.sh@509 -- $ get_config_params
00:06:20.327 13:01:07 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:06:20.327 13:01:07 -- common/autotest_common.sh@10 -- $ set +x
00:06:20.327 13:01:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:06:20.327 13:01:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:06:20.327 13:01:07 -- pm/common@17 -- $ local monitor
00:06:20.327 13:01:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:20.327 13:01:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:20.327 13:01:07 -- pm/common@25 -- $ sleep 1
00:06:20.327 13:01:07 -- pm/common@21 -- $ date +%s
00:06:20.327 13:01:07 -- pm/common@21 -- $ date +%s
00:06:20.327 13:01:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490067
00:06:20.327 13:01:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490067
00:06:20.327 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490067_collect-cpu-load.pm.log
00:06:20.327 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490067_collect-vmstat.pm.log
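[Editor's note] The pm/common traces above launch two resource monitors that share a single date +%s-derived prefix so their samples can be correlated later. A minimal sketch of that pattern; the backgrounding with & and the mkdir are assumptions here (the real pm/common helper also records PIDs for the stop_monitor_resources EXIT trap, omitted):

    power_dir=/home/vagrant/spdk_repo/output/power
    prefix="monitor.autobuild.sh.$(date +%s)"
    mkdir -p "$power_dir"
    # -d: output directory, -l: redirect output to a log file, -p: common prefix,
    # exactly the flags recorded in the log above.
    spdk/scripts/perf/pm/collect-cpu-load -d "$power_dir" -l -p "$prefix" &
    spdk/scripts/perf/pm/collect-vmstat   -d "$power_dir" -l -p "$prefix" &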
00:06:21.258 13:01:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:06:21.258 13:01:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:06:21.258 13:01:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:06:21.258 13:01:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:06:21.258 13:01:08 -- spdk/autobuild.sh@16 -- $ date -u
00:06:21.258 Fri Dec 6 01:01:08 PM UTC 2024
00:06:21.258 13:01:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:06:21.258 v25.01-pre-309-ge9db16374
00:06:21.258 13:01:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:06:21.258 13:01:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:06:21.258 13:01:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:21.258 13:01:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:21.258 13:01:08 -- common/autotest_common.sh@10 -- $ set +x
00:06:21.258 ************************************
00:06:21.258 START TEST asan
00:06:21.258 ************************************
00:06:21.258 using asan
00:06:21.258 13:01:08 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:06:21.258 
00:06:21.258 real 0m0.000s
00:06:21.258 user 0m0.000s
00:06:21.258 sys 0m0.000s
00:06:21.258 13:01:08 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:21.258 ************************************
00:06:21.258 END TEST asan
00:06:21.258 13:01:08 asan -- common/autotest_common.sh@10 -- $ set +x
00:06:21.258 ************************************
00:06:21.258 13:01:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:06:21.258 13:01:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:06:21.258 13:01:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:21.258 13:01:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:21.258 13:01:08 -- common/autotest_common.sh@10 -- $ set +x
00:06:21.258 ************************************
00:06:21.258 START TEST ubsan
00:06:21.258 ************************************
00:06:21.258 using ubsan
00:06:21.258 13:01:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:06:21.258 
00:06:21.258 real 0m0.000s
00:06:21.258 user 0m0.000s
00:06:21.258 sys 0m0.000s
00:06:21.258 ************************************
00:06:21.258 13:01:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:21.258 13:01:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:06:21.258 END TEST ubsan
00:06:21.258 ************************************
00:06:21.258 13:01:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:06:21.258 13:01:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:06:21.258 13:01:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:06:21.258 13:01:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:06:21.258 13:01:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:06:21.258 13:01:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:06:21.258 13:01:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:06:21.258 13:01:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:06:21.258 13:01:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:06:21.515 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:21.515 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:06:22.080 Using 'verbs' RDMA provider
00:06:37.881 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:50.076 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:50.076 Creating mk/config.mk...done.
00:06:50.076 Creating mk/cc.flags.mk...done.
00:06:50.076 Type 'make' to build.
00:06:50.076 13:01:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:50.076 13:01:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:50.076 13:01:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:50.076 13:01:36 -- common/autotest_common.sh@10 -- $ set +x
00:06:50.077 ************************************
00:06:50.077 START TEST make
00:06:50.077 ************************************
00:06:50.077 13:01:36 make -- common/autotest_common.sh@1129 -- $ make -j10
00:06:50.077 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:06:50.077   export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:06:50.077   meson setup builddir \
00:06:50.077     -Dwith-libaio=enabled \
00:06:50.077     -Dwith-liburing=enabled \
00:06:50.077     -Dwith-libvfn=disabled \
00:06:50.077     -Dwith-spdk=disabled \
00:06:50.077     -Dexamples=false \
00:06:50.077     -Dtests=false \
00:06:50.077     -Dtools=false && \
00:06:50.077   meson compile -C builddir && \
00:06:50.077   cd -)
00:06:50.077 make[1]: Nothing to be done for 'all'.
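[Editor's note] To reproduce this build configuration outside the CI VM, the same configure and make steps can be run by hand; a sketch under the assumption that the upstream repo is https://github.com/spdk/spdk and that fio sources are unpacked at /usr/src/fio (the configure flags are exactly those recorded above):

    git clone https://github.com/spdk/spdk && cd spdk
    git submodule update --init        # pulls dpdk, xnvme, isa-l, ...
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10

With --with-xnvme, the top-level make drives the nested meson setup/compile for the xnvme subproject shown above, whose output follows.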
00:06:52.606 The Meson build system
00:06:52.606 Version: 1.5.0
00:06:52.606 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:06:52.606 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:52.606 Build type: native build
00:06:52.606 Project name: xnvme
00:06:52.606 Project version: 0.7.5
00:06:52.606 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:52.606 C linker for the host machine: cc ld.bfd 2.40-14
00:06:52.606 Host machine cpu family: x86_64
00:06:52.606 Host machine cpu: x86_64
00:06:52.606 Message: host_machine.system: linux
00:06:52.606 Compiler for C supports arguments -Wno-missing-braces: YES
00:06:52.606 Compiler for C supports arguments -Wno-cast-function-type: YES
00:06:52.606 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:06:52.606 Run-time dependency threads found: YES
00:06:52.606 Has header "setupapi.h" : NO
00:06:52.606 Has header "linux/blkzoned.h" : YES
00:06:52.606 Has header "linux/blkzoned.h" : YES (cached)
00:06:52.606 Has header "libaio.h" : YES
00:06:52.606 Library aio found: YES
00:06:52.606 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:52.606 Run-time dependency liburing found: YES 2.2
00:06:52.606 Dependency libvfn skipped: feature with-libvfn disabled
00:06:52.606 Found CMake: /usr/bin/cmake (3.27.7)
00:06:52.606 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:06:52.606 Subproject spdk : skipped: feature with-spdk disabled
00:06:52.606 Run-time dependency appleframeworks found: NO (tried framework)
00:06:52.606 Run-time dependency appleframeworks found: NO (tried framework)
00:06:52.606 Library rt found: YES
00:06:52.606 Checking for function "clock_gettime" with dependency -lrt: YES
00:06:52.606 Configuring xnvme_config.h using configuration
00:06:52.606 Configuring xnvme.spec using configuration
00:06:52.606 Run-time dependency bash-completion found: YES 2.11
00:06:52.606 Message: Bash-completions: /usr/share/bash-completion/completions
00:06:52.606 Program cp found: YES (/usr/bin/cp)
00:06:52.606 Build targets in project: 3
00:06:52.606 
00:06:52.606 xnvme 0.7.5
00:06:52.606 
00:06:52.606   Subprojects
00:06:52.606     spdk         : NO Feature 'with-spdk' disabled
00:06:52.606 
00:06:52.606   User defined options
00:06:52.606     examples     : false
00:06:52.606     tests        : false
00:06:52.606     tools        : false
00:06:52.606     with-libaio  : enabled
00:06:52.606     with-liburing: enabled
00:06:52.606     with-libvfn  : disabled
00:06:52.606     with-spdk    : disabled
00:06:52.606 
00:06:52.606 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:53.540 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:06:53.540 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:06:53.540 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:06:53.540 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:06:53.540 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:06:53.540 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:06:53.540 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:06:53.540 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:06:53.540 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:06:53.540 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:06:53.540 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:06:53.540 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:06:53.797 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:06:53.797 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:06:53.797 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:06:53.797 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:06:53.797 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:06:53.797 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:06:53.797 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:06:53.797 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:06:53.797 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:06:53.797 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:06:53.797 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:06:53.797 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:06:53.797 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:06:53.797 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:06:53.797 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:06:54.054 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:06:54.054 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:06:54.054 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:06:54.054 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:06:54.054 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:06:54.054 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:06:54.054 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:06:54.054 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:06:54.054 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:06:54.054 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:06:54.054 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:06:54.054 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:06:54.054 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:06:54.054 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:06:54.054 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:06:54.054 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:06:54.054 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:06:54.054 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:06:54.054 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:06:54.054 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:06:54.054 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:06:54.054 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:06:54.054 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:06:54.054 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:06:54.054 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:06:54.054 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:06:54.312 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:06:54.312 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:06:54.312 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:06:54.312 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:06:54.312 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:06:54.312 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:06:54.312 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:06:54.312 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:06:54.312 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:06:54.312 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:06:54.312 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:06:54.312 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:06:54.570 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:06:54.570 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:06:54.570 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:06:54.570 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:06:54.570 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:06:54.570 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:06:54.570 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:06:54.570 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:06:54.570 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:06:55.134 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:06:55.134 [75/76] Linking static target lib/libxnvme.a
00:06:55.134 [76/76] Linking target lib/libxnvme.so.0.7.5
00:06:55.134 INFO: autodetecting backend as ninja
00:06:55.134 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:55.134 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:07:05.139 The Meson build system
00:07:05.139 Version: 1.5.0
00:07:05.139 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:07:05.139 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:07:05.139 Build type: native build
00:07:05.139 Program cat found: YES (/usr/bin/cat)
00:07:05.139 Project name: DPDK
00:07:05.139 Project version: 24.03.0
00:07:05.139 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:05.139 C linker for the host machine: cc ld.bfd 2.40-14
00:07:05.139 Host machine cpu family: x86_64
00:07:05.139 Host machine cpu: x86_64
00:07:05.139 Message: ## Building in Developer Mode ##
00:07:05.139 Program pkg-config found: YES (/usr/bin/pkg-config)
00:07:05.139 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:07:05.139 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:07:05.139 Program python3 found: YES (/usr/bin/python3)
00:07:05.139 Program cat found: YES (/usr/bin/cat)
00:07:05.139 Compiler for C supports arguments -march=native: YES
00:07:05.139 Checking for size of "void *" : 8
00:07:05.139 Checking for size of "void *" : 8 (cached)
00:07:05.139 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:07:05.139 Library m found: YES
00:07:05.139 Library numa found: YES
00:07:05.139 Has header "numaif.h" : YES
00:07:05.139 Library fdt found: NO
00:07:05.139 Library execinfo found: NO
00:07:05.139 Has header "execinfo.h" : YES
00:07:05.139 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:05.139 Run-time dependency libarchive found: NO (tried pkgconfig)
00:07:05.139 Run-time dependency libbsd found: NO (tried pkgconfig)
00:07:05.139 Run-time dependency jansson found: NO (tried pkgconfig)
00:07:05.139 Run-time dependency openssl found: YES 3.1.1
00:07:05.139 Run-time dependency libpcap found: YES 1.10.4
00:07:05.139 Has header "pcap.h" with dependency libpcap: YES
00:07:05.139 Compiler for C supports arguments -Wcast-qual: YES
00:07:05.139 Compiler for C supports arguments -Wdeprecated: YES
00:07:05.139 Compiler for C supports arguments -Wformat: YES
00:07:05.139 Compiler for C supports arguments -Wformat-nonliteral: NO
00:07:05.139 Compiler for C supports arguments -Wformat-security: NO
00:07:05.139 Compiler for C supports arguments -Wmissing-declarations: YES
00:07:05.139 Compiler for C supports arguments -Wmissing-prototypes: YES
00:07:05.139 Compiler for C supports arguments -Wnested-externs: YES
00:07:05.139 Compiler for C supports arguments -Wold-style-definition: YES
00:07:05.139 Compiler for C supports arguments -Wpointer-arith: YES
00:07:05.139 Compiler for C supports arguments -Wsign-compare: YES
00:07:05.139 Compiler for C supports arguments -Wstrict-prototypes: YES
00:07:05.139 Compiler for C supports arguments -Wundef: YES
00:07:05.139 Compiler for C supports arguments -Wwrite-strings: YES
00:07:05.139 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:07:05.139 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:07:05.139 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:07:05.139 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:07:05.139 Program objdump found: YES (/usr/bin/objdump)
00:07:05.139 Compiler for C supports arguments -mavx512f: YES
00:07:05.139 Checking if "AVX512 checking" compiles: YES
00:07:05.139 Fetching value of define "__SSE4_2__" : 1
00:07:05.139 Fetching value of define "__AES__" : 1
00:07:05.139 Fetching value of define "__AVX__" : 1
00:07:05.139 Fetching value of define "__AVX2__" : 1
00:07:05.139 Fetching value of define "__AVX512BW__" : (undefined)
00:07:05.139 Fetching value of define "__AVX512CD__" : (undefined)
00:07:05.139 Fetching value of define "__AVX512DQ__" : (undefined)
00:07:05.139 Fetching value of define "__AVX512F__" : (undefined)
00:07:05.139 Fetching value of define "__AVX512VL__" : (undefined)
00:07:05.139 Fetching value of define "__PCLMUL__" : 1
00:07:05.139 Fetching value of define "__RDRND__" : 1
00:07:05.139 Fetching value of define "__RDSEED__" : 1
00:07:05.139 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:07:05.139 Fetching value of define "__znver1__" : (undefined)
00:07:05.139 Fetching value of define "__znver2__" : (undefined)
00:07:05.139 Fetching value of define "__znver3__" : (undefined)
00:07:05.139 Fetching value of define "__znver4__" : (undefined)
00:07:05.139 Library asan found: YES
00:07:05.139 Compiler for C supports arguments -Wno-format-truncation: YES
00:07:05.139 Message: lib/log: Defining dependency "log"
00:07:05.139 Message: lib/kvargs: Defining dependency "kvargs"
00:07:05.139 Message: lib/telemetry: Defining dependency "telemetry"
00:07:05.139 Library rt found: YES
00:07:05.139 Checking for function "getentropy" : NO
00:07:05.139 Message: lib/eal: Defining dependency "eal"
00:07:05.139 Message: lib/ring: Defining dependency "ring"
00:07:05.139 Message: lib/rcu: Defining dependency "rcu"
00:07:05.139 Message: lib/mempool: Defining dependency "mempool"
00:07:05.139 Message: lib/mbuf: Defining dependency "mbuf"
00:07:05.139 Fetching value of define "__PCLMUL__" : 1 (cached)
00:07:05.139 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:07:05.139 Compiler for C supports arguments -mpclmul: YES
00:07:05.139 Compiler for C supports arguments -maes: YES
00:07:05.139 Compiler for C supports arguments -mavx512f: YES (cached)
00:07:05.139 Compiler for C supports arguments -mavx512bw: YES
00:07:05.139 Compiler for C supports arguments -mavx512dq: YES
00:07:05.139 Compiler for C supports arguments -mavx512vl: YES
00:07:05.139 Compiler for C supports arguments -mvpclmulqdq: YES
00:07:05.139 Compiler for C supports arguments -mavx2: YES
00:07:05.139 Compiler for C supports arguments -mavx: YES
00:07:05.139 Message: lib/net: Defining dependency "net"
00:07:05.139 Message: lib/meter: Defining dependency "meter"
00:07:05.139 Message: lib/ethdev: Defining dependency "ethdev"
00:07:05.139 Message: lib/pci: Defining dependency "pci"
00:07:05.139 Message: lib/cmdline: Defining dependency "cmdline"
00:07:05.139 Message: lib/hash: Defining dependency "hash"
00:07:05.139 Message: lib/timer: Defining dependency "timer"
00:07:05.139 Message: lib/compressdev: Defining dependency "compressdev"
00:07:05.139 Message: lib/cryptodev: Defining dependency "cryptodev"
00:07:05.139 Message: lib/dmadev: Defining dependency "dmadev"
00:07:05.139 Compiler for C supports arguments -Wno-cast-qual: YES
00:07:05.139 Message: lib/power: Defining dependency "power"
00:07:05.139 Message: lib/reorder: Defining dependency "reorder"
00:07:05.139 Message: lib/security: Defining dependency "security"
00:07:05.139 Has header "linux/userfaultfd.h" : YES
00:07:05.139 Has header "linux/vduse.h" : YES
00:07:05.139 Message: lib/vhost: Defining dependency "vhost"
00:07:05.139 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:07:05.139 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:07:05.139 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:07:05.139 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:07:05.139 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:07:05.139 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:07:05.139 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:07:05.139 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:07:05.139 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:07:05.139 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:07:05.139 Program doxygen found: YES (/usr/local/bin/doxygen)
00:07:05.139 Configuring doxy-api-html.conf using configuration
00:07:05.139 Configuring doxy-api-man.conf using configuration
00:07:05.139 Program mandb found: YES (/usr/bin/mandb)
00:07:05.139 Program sphinx-build found: NO
00:07:05.139 Configuring rte_build_config.h using configuration
00:07:05.139 Message: 
00:07:05.139 =================
00:07:05.139 Applications Enabled
00:07:05.139 =================
00:07:05.139 
00:07:05.139 apps:
00:07:05.139 
00:07:05.139 
00:07:05.139 Message: 
00:07:05.139 =================
00:07:05.139 Libraries Enabled
00:07:05.139 =================
00:07:05.139 
00:07:05.139 libs:
00:07:05.139 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:07:05.140 	net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:07:05.140 	cryptodev, dmadev, power, reorder, security, vhost,
00:07:05.140 
00:07:05.140 Message: 
00:07:05.140 ===============
00:07:05.140 Drivers Enabled
00:07:05.140 ===============
00:07:05.140 
00:07:05.140 common:
00:07:05.140 
00:07:05.140 bus:
00:07:05.140 	pci, vdev,
00:07:05.140 mempool:
00:07:05.140 	ring,
00:07:05.140 dma:
00:07:05.140 
00:07:05.140 net:
00:07:05.140 
00:07:05.140 crypto:
00:07:05.140 
00:07:05.140 compress:
00:07:05.140 
00:07:05.140 vdpa:
00:07:05.140 
00:07:05.140 
00:07:05.140 Message: 
00:07:05.140 =================
00:07:05.140 Content Skipped
00:07:05.140 =================
00:07:05.140 
00:07:05.140 apps:
00:07:05.140 	dumpcap: explicitly disabled via build config
00:07:05.140 	graph: explicitly disabled via build config
00:07:05.140 	pdump: explicitly disabled via build config
00:07:05.140 	proc-info: explicitly disabled via build config
00:07:05.140 	test-acl: explicitly disabled via build config
00:07:05.140 	test-bbdev: explicitly disabled via build config
00:07:05.140 	test-cmdline: explicitly disabled via build config
00:07:05.140 	test-compress-perf: explicitly disabled via build config
00:07:05.140 	test-crypto-perf: explicitly disabled via build config
00:07:05.140 	test-dma-perf: explicitly disabled via build config
00:07:05.140 	test-eventdev: explicitly disabled via build config
00:07:05.140 	test-fib: explicitly disabled via build config
00:07:05.140 	test-flow-perf: explicitly disabled via build config
00:07:05.140 	test-gpudev: explicitly disabled via build config
00:07:05.140 	test-mldev: explicitly disabled via build config
00:07:05.140 	test-pipeline: explicitly disabled via build config
00:07:05.140 	test-pmd: explicitly disabled via build config
00:07:05.140 	test-regex: explicitly disabled via build config
00:07:05.140 	test-sad: explicitly disabled via build config
00:07:05.140 	test-security-perf: explicitly disabled via build config
00:07:05.140 
00:07:05.140 libs:
00:07:05.140 	argparse: explicitly disabled via build config
00:07:05.140 	metrics: explicitly disabled via build config
00:07:05.140 	acl: explicitly disabled via build config
00:07:05.140 	bbdev: explicitly disabled via build config
00:07:05.140 	bitratestats: explicitly disabled via build config
00:07:05.140 	bpf: explicitly disabled via build config
00:07:05.140 	cfgfile: explicitly disabled via build config
00:07:05.140 	distributor: explicitly disabled via build config
00:07:05.140 	efd: explicitly disabled via build config
00:07:05.140 	eventdev: explicitly disabled via build config
00:07:05.140 	dispatcher: explicitly disabled via build config
00:07:05.140 	gpudev: explicitly disabled via build config
00:07:05.140 	gro: explicitly disabled via build config
00:07:05.140 	gso: explicitly disabled via build config
00:07:05.140 	ip_frag: explicitly disabled via build config
00:07:05.140 	jobstats: explicitly disabled via build config
00:07:05.140 	latencystats: explicitly disabled via build config
00:07:05.140 	lpm: explicitly disabled via build config
00:07:05.140 	member: explicitly disabled via build config
00:07:05.140 	pcapng: explicitly disabled via build config
00:07:05.140 	rawdev: explicitly disabled via build config
00:07:05.140 	regexdev: explicitly disabled via build config
00:07:05.140 	mldev: explicitly disabled via build config
00:07:05.140 	rib: explicitly disabled via build config
00:07:05.140 	sched: explicitly disabled via build config
00:07:05.140 	stack: explicitly disabled via build config
00:07:05.140 	ipsec: explicitly disabled via build config
00:07:05.140 	pdcp: explicitly disabled via build config
00:07:05.140 	fib: explicitly disabled via build config
00:07:05.140 	port: explicitly disabled via build config
00:07:05.140 	pdump: explicitly disabled via build config
00:07:05.140 	table: explicitly disabled via build config
00:07:05.140 	pipeline: explicitly disabled via build config
00:07:05.140 	graph: explicitly disabled via build config
00:07:05.140 	node: explicitly disabled via build config
00:07:05.140 
00:07:05.140 drivers:
00:07:05.140 	common/cpt: not in enabled drivers build config
00:07:05.140 	common/dpaax: not in enabled drivers build config
00:07:05.140 	common/iavf: not in enabled drivers build config
00:07:05.140 	common/idpf: not in enabled drivers build config
00:07:05.140 	common/ionic: not in enabled drivers build config
00:07:05.140 	common/mvep: not in enabled drivers build config
00:07:05.140 	common/octeontx: not in enabled drivers build config
00:07:05.140 	bus/auxiliary: not in enabled drivers build config
00:07:05.140 	bus/cdx: not in enabled drivers build config
00:07:05.140 	bus/dpaa: not in enabled drivers build config
00:07:05.140 	bus/fslmc: not in enabled drivers build config
00:07:05.140 	bus/ifpga: not in enabled drivers build config
00:07:05.140 	bus/platform: not in enabled drivers build config
00:07:05.140 	bus/uacce: not in enabled drivers build config
00:07:05.140 	bus/vmbus: not in enabled drivers build config
00:07:05.140 	common/cnxk: not in enabled drivers build config
00:07:05.140 	common/mlx5: not in enabled drivers build config
00:07:05.140 	common/nfp: not in enabled drivers build config
00:07:05.140 	common/nitrox: not in enabled drivers build config
00:07:05.140 	common/qat: not in enabled drivers build config
00:07:05.140 	common/sfc_efx: not in enabled drivers build config
00:07:05.140 	mempool/bucket: not in enabled drivers build config
00:07:05.140 	mempool/cnxk: not in enabled drivers build config
00:07:05.140 	mempool/dpaa: not in enabled drivers build config
00:07:05.140 	mempool/dpaa2: not in enabled drivers build config
00:07:05.140 	mempool/octeontx: not in enabled drivers build config
00:07:05.140 	mempool/stack: not in enabled drivers build config
00:07:05.140 	dma/cnxk: not in enabled drivers build config
00:07:05.140 	dma/dpaa: not in enabled drivers build config
00:07:05.140 	dma/dpaa2: not in enabled drivers build config
00:07:05.140 	dma/hisilicon: not in enabled drivers build config
00:07:05.140 	dma/idxd: not in enabled drivers build config
00:07:05.140 	dma/ioat: not in enabled drivers build config
00:07:05.140 	dma/skeleton: not in enabled drivers build config
00:07:05.140 	net/af_packet: not in enabled drivers build config
00:07:05.140 	net/af_xdp: not in enabled drivers build config
00:07:05.140 	net/ark: not in enabled drivers build config
00:07:05.140 	net/atlantic: not in enabled drivers build config
00:07:05.140 	net/avp: not in enabled drivers build config
00:07:05.140 	net/axgbe: not in enabled drivers build config
00:07:05.140 	net/bnx2x: not in enabled drivers build config
00:07:05.140 	net/bnxt: not in enabled drivers build config
00:07:05.140 	net/bonding: not in enabled drivers build config
00:07:05.140 	net/cnxk: not in enabled drivers build config
00:07:05.140 	net/cpfl: not in enabled drivers build config
00:07:05.140 	net/cxgbe: not in enabled drivers build config
00:07:05.140 	net/dpaa: not in enabled drivers build config
00:07:05.140 	net/dpaa2: not in enabled drivers build config
00:07:05.140 	net/e1000: not in enabled drivers build config
00:07:05.140 	net/ena: not in enabled drivers build config
00:07:05.140 	net/enetc: not in enabled drivers build config
00:07:05.140 	net/enetfec: not in enabled drivers build config
00:07:05.140 	net/enic: not in enabled drivers build config
00:07:05.140 	net/failsafe: not in enabled drivers build config
00:07:05.140 	net/fm10k: not in enabled drivers build config
00:07:05.140 	net/gve: not in enabled drivers build config
00:07:05.140 	net/hinic: not in enabled drivers build config
00:07:05.140 	net/hns3: not in enabled drivers build config
00:07:05.140 	net/i40e: not in enabled drivers build config
00:07:05.140 	net/iavf: not in enabled drivers build config
00:07:05.140 	net/ice: not in enabled drivers build config
00:07:05.140 	net/idpf: not in enabled drivers build config
00:07:05.140 	net/igc: not in enabled drivers build config
00:07:05.140 	net/ionic: not in enabled drivers build config
00:07:05.140 	net/ipn3ke: not in enabled drivers build config
00:07:05.140 	net/ixgbe: not in enabled drivers build config
00:07:05.140 	net/mana: not in enabled drivers build config
00:07:05.140 	net/memif: not in enabled drivers build config
00:07:05.140 	net/mlx4: not in enabled drivers build config
00:07:05.140 	net/mlx5: not in enabled drivers build config
00:07:05.140 	net/mvneta: not in enabled drivers build config
00:07:05.140 	net/mvpp2: not in enabled drivers build config
00:07:05.140 	net/netvsc: not in enabled drivers build config
00:07:05.140 	net/nfb: not in enabled drivers build config
00:07:05.140 	net/nfp: not in enabled drivers build config
00:07:05.140 	net/ngbe: not in enabled drivers build config
00:07:05.140 	net/null: not in enabled drivers build config
00:07:05.140 	net/octeontx: not in enabled drivers build config
00:07:05.140 	net/octeon_ep: not in enabled drivers build config
00:07:05.140 	net/pcap: not in enabled drivers build config
00:07:05.140 	net/pfe: not in enabled drivers build config
00:07:05.140 	net/qede: not in enabled drivers build config
00:07:05.140 	net/ring: not in enabled drivers build config
00:07:05.140 	net/sfc: not in enabled drivers build config
00:07:05.140 	net/softnic: not in enabled drivers build config
00:07:05.140 	net/tap: not in enabled drivers build config
00:07:05.140 	net/thunderx: not in enabled drivers build config
00:07:05.140 	net/txgbe: not in enabled drivers build config
00:07:05.140 	net/vdev_netvsc: not in enabled drivers build config
00:07:05.140 	net/vhost: not in enabled drivers build config
00:07:05.140 	net/virtio: not in enabled drivers build config
00:07:05.140 	net/vmxnet3: not in enabled drivers build config
00:07:05.140 	raw/*: missing internal dependency, "rawdev"
00:07:05.140 	crypto/armv8: not in enabled drivers build config
00:07:05.140 	crypto/bcmfs: not in enabled drivers build config
00:07:05.140 	crypto/caam_jr: not in enabled drivers build config
00:07:05.140 	crypto/ccp: not in enabled drivers build config
00:07:05.140 	crypto/cnxk: not in enabled drivers build config
00:07:05.140 	crypto/dpaa_sec: not in enabled drivers build config
00:07:05.140 	crypto/dpaa2_sec: not in enabled drivers build config
00:07:05.140 	crypto/ipsec_mb: not in enabled drivers build config
00:07:05.140 	crypto/mlx5: not in enabled drivers build config
00:07:05.140 	crypto/mvsam: not in enabled drivers build config
00:07:05.140 	crypto/nitrox: not in enabled drivers build config
00:07:05.140 	crypto/null: not in enabled drivers build config
00:07:05.141 	crypto/octeontx: not in enabled drivers build config
00:07:05.141 
crypto/openssl: not in enabled drivers build config 00:07:05.141 crypto/scheduler: not in enabled drivers build config 00:07:05.141 crypto/uadk: not in enabled drivers build config 00:07:05.141 crypto/virtio: not in enabled drivers build config 00:07:05.141 compress/isal: not in enabled drivers build config 00:07:05.141 compress/mlx5: not in enabled drivers build config 00:07:05.141 compress/nitrox: not in enabled drivers build config 00:07:05.141 compress/octeontx: not in enabled drivers build config 00:07:05.141 compress/zlib: not in enabled drivers build config 00:07:05.141 regex/*: missing internal dependency, "regexdev" 00:07:05.141 ml/*: missing internal dependency, "mldev" 00:07:05.141 vdpa/ifc: not in enabled drivers build config 00:07:05.141 vdpa/mlx5: not in enabled drivers build config 00:07:05.141 vdpa/nfp: not in enabled drivers build config 00:07:05.141 vdpa/sfc: not in enabled drivers build config 00:07:05.141 event/*: missing internal dependency, "eventdev" 00:07:05.141 baseband/*: missing internal dependency, "bbdev" 00:07:05.141 gpu/*: missing internal dependency, "gpudev" 00:07:05.141 00:07:05.141 00:07:05.141 Build targets in project: 85 00:07:05.141 00:07:05.141 DPDK 24.03.0 00:07:05.141 00:07:05.141 User defined options 00:07:05.141 buildtype : debug 00:07:05.141 default_library : shared 00:07:05.141 libdir : lib 00:07:05.141 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:05.141 b_sanitize : address 00:07:05.141 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:05.141 c_link_args : 00:07:05.141 cpu_instruction_set: native 00:07:05.141 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:05.141 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:05.141 enable_docs : false 00:07:05.141 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:05.141 enable_kmods : false 00:07:05.141 max_lcores : 128 00:07:05.141 tests : false 00:07:05.141 00:07:05.141 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:05.399 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:05.657 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:05.657 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:05.657 [3/268] Linking static target lib/librte_kvargs.a 00:07:05.657 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:05.657 [5/268] Linking static target lib/librte_log.a 00:07:05.657 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:06.223 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.223 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:06.223 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:06.480 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:06.480 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
00:07:06.480 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:06.481 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:06.738 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:06.738 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:06.738 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.738 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:06.738 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:06.738 [19/268] Linking static target lib/librte_telemetry.a 00:07:06.738 [20/268] Linking target lib/librte_log.so.24.1 00:07:07.302 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:07.302 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:07.302 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:07.302 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:07.560 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:07.560 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:07.560 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:07.560 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:07.817 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:07.817 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:07.817 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.817 [32/268] Linking target lib/librte_telemetry.so.24.1 00:07:08.074 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:08.074 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:08.074 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:08.331 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:08.331 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:08.588 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:08.588 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:08.588 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:08.845 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:08.845 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:08.846 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:09.154 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:09.412 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:09.412 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:09.671 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:09.671 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:09.671 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:09.950 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 
00:07:09.950 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:09.950 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:09.950 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:10.210 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:10.470 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:10.470 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:10.730 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:10.730 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:10.990 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:10.990 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:10.990 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:10.990 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:10.990 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:11.250 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:11.250 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:11.509 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:11.509 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:11.509 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:11.509 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:11.767 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:11.768 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:11.768 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:12.027 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:12.027 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:12.027 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:12.027 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:12.293 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:12.293 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:12.293 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:12.568 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:12.568 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:12.568 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:12.826 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:12.826 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:13.084 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:13.084 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:13.084 [87/268] Linking static target lib/librte_ring.a 00:07:13.084 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:13.084 [89/268] Linking static target lib/librte_rcu.a 00:07:13.342 [90/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:13.342 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:13.601 [92/268] Linking static target 
lib/librte_eal.a 00:07:13.601 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:13.601 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:13.601 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:13.859 [96/268] Linking static target lib/librte_mempool.a 00:07:13.859 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.859 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.118 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:14.118 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:14.375 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:14.375 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:14.636 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:14.636 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:14.894 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:15.152 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:15.152 [107/268] Linking static target lib/librte_mbuf.a 00:07:15.152 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.152 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:15.152 [110/268] Linking static target lib/librte_meter.a 00:07:15.410 [111/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:15.410 [112/268] Linking static target lib/librte_net.a 00:07:15.410 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:15.410 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:15.977 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:15.977 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.977 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.235 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:16.493 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:16.493 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.493 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:17.064 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:17.335 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:17.335 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:17.335 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:17.593 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:17.593 [127/268] Linking static target lib/librte_pci.a 00:07:17.593 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:17.593 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:17.593 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:17.849 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:17.849 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:17.849 [133/268] Generating lib/pci.sym_chk with 
a custom command (wrapped by meson to capture output) 00:07:17.849 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:17.849 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:17.849 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:18.106 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:18.106 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:18.106 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:18.106 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:18.106 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:18.106 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:18.106 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:18.362 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:18.362 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:18.926 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:18.926 [147/268] Linking static target lib/librte_cmdline.a 00:07:18.926 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:19.489 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:19.489 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:19.489 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:19.489 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:19.489 [153/268] Linking static target lib/librte_timer.a 00:07:19.489 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:20.052 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:20.052 [156/268] Linking static target lib/librte_hash.a 00:07:20.309 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:20.310 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:20.310 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:20.310 [160/268] Linking static target lib/librte_ethdev.a 00:07:20.310 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.566 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:20.566 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:20.823 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:20.823 [165/268] Linking static target lib/librte_compressdev.a 00:07:21.082 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.082 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:21.082 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:21.340 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:21.340 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:21.340 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:21.598 [172/268] Linking static target lib/librte_dmadev.a 00:07:21.598 [173/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.856 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:22.114 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.114 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:22.371 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:22.371 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:22.630 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:22.630 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:22.630 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.630 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:22.888 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:22.888 [184/268] Linking static target lib/librte_power.a 00:07:23.453 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:23.711 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:23.711 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:23.970 [188/268] Linking static target lib/librte_security.a 00:07:24.229 [189/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:24.229 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:24.229 [191/268] Linking static target lib/librte_cryptodev.a 00:07:24.229 [192/268] Linking static target lib/librte_reorder.a 00:07:24.488 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:24.746 [194/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.746 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:25.005 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.264 [197/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.522 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:26.095 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:26.355 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:26.355 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:26.646 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:26.646 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:26.646 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:26.919 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:27.178 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:27.438 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:27.438 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:27.438 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:27.697 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:27.697 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:27.956 [212/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:27.956 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:27.956 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:27.956 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:27.956 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:27.956 [217/268] Linking static target drivers/librte_bus_pci.a 00:07:27.956 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:27.956 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:27.956 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:28.217 [221/268] Linking static target drivers/librte_bus_vdev.a 00:07:28.217 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:28.217 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:28.217 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:28.217 [225/268] Linking static target drivers/librte_mempool_ring.a 00:07:28.476 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:28.476 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:28.735 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:28.735 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:28.993 [230/268] Linking target lib/librte_eal.so.24.1 00:07:28.993 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:28.993 [232/268] Linking target lib/librte_ring.so.24.1 00:07:28.993 [233/268] Linking target lib/librte_meter.so.24.1 00:07:29.250 [234/268] Linking target lib/librte_dmadev.so.24.1 00:07:29.250 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:29.250 [236/268] Linking target lib/librte_timer.so.24.1 00:07:29.250 [237/268] Linking target lib/librte_pci.so.24.1 00:07:29.250 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:29.250 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:29.250 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:29.250 [241/268] Linking target lib/librte_mempool.so.24.1 00:07:29.250 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:29.250 [243/268] Linking target lib/librte_rcu.so.24.1 00:07:29.508 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:29.508 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:29.508 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:29.508 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:29.508 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:29.508 [249/268] Linking target lib/librte_mbuf.so.24.1 00:07:29.767 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:29.767 [251/268] Linking target lib/librte_reorder.so.24.1 00:07:29.767 [252/268] Linking target lib/librte_net.so.24.1 
00:07:29.767 [253/268] Linking target lib/librte_compressdev.so.24.1 00:07:29.767 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:07:29.767 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:30.026 [256/268] Linking target lib/librte_cmdline.so.24.1 00:07:30.026 [257/268] Linking target lib/librte_hash.so.24.1 00:07:30.026 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:30.026 [259/268] Linking target lib/librte_security.so.24.1 00:07:30.026 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:30.593 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.851 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:30.851 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:31.108 [264/268] Linking target lib/librte_power.so.24.1 00:07:34.408 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:34.408 [266/268] Linking static target lib/librte_vhost.a 00:07:35.342 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.601 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:35.601 INFO: autodetecting backend as ninja 00:07:35.601 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:57.536 CC lib/ut/ut.o 00:07:57.536 CC lib/log/log.o 00:07:57.536 CC lib/log/log_deprecated.o 00:07:57.536 CC lib/log/log_flags.o 00:07:57.536 CC lib/ut_mock/mock.o 00:07:57.536 LIB libspdk_ut.a 00:07:57.536 LIB libspdk_log.a 00:07:57.536 LIB libspdk_ut_mock.a 00:07:57.536 SO libspdk_ut.so.2.0 00:07:57.536 SO libspdk_log.so.7.1 00:07:57.536 SO libspdk_ut_mock.so.6.0 00:07:57.536 SYMLINK libspdk_ut.so 00:07:57.536 SYMLINK libspdk_log.so 00:07:57.536 SYMLINK libspdk_ut_mock.so 00:07:57.793 CC lib/ioat/ioat.o 00:07:57.793 CXX lib/trace_parser/trace.o 00:07:57.793 CC lib/util/base64.o 00:07:57.793 CC lib/util/bit_array.o 00:07:57.793 CC lib/util/crc16.o 00:07:57.793 CC lib/util/cpuset.o 00:07:57.793 CC lib/util/crc32.o 00:07:57.793 CC lib/util/crc32c.o 00:07:57.793 CC lib/dma/dma.o 00:07:57.793 CC lib/vfio_user/host/vfio_user_pci.o 00:07:57.793 CC lib/util/crc32_ieee.o 00:07:57.793 CC lib/util/crc64.o 00:07:58.051 CC lib/util/dif.o 00:07:58.051 LIB libspdk_dma.a 00:07:58.051 CC lib/util/fd.o 00:07:58.051 CC lib/util/fd_group.o 00:07:58.051 SO libspdk_dma.so.5.0 00:07:58.051 LIB libspdk_ioat.a 00:07:58.051 CC lib/vfio_user/host/vfio_user.o 00:07:58.051 SYMLINK libspdk_dma.so 00:07:58.051 SO libspdk_ioat.so.7.0 00:07:58.051 CC lib/util/file.o 00:07:58.051 CC lib/util/hexlify.o 00:07:58.051 CC lib/util/iov.o 00:07:58.051 SYMLINK libspdk_ioat.so 00:07:58.051 CC lib/util/math.o 00:07:58.051 CC lib/util/net.o 00:07:58.308 CC lib/util/pipe.o 00:07:58.308 CC lib/util/strerror_tls.o 00:07:58.308 CC lib/util/string.o 00:07:58.308 LIB libspdk_vfio_user.a 00:07:58.308 CC lib/util/uuid.o 00:07:58.308 SO libspdk_vfio_user.so.5.0 00:07:58.308 CC lib/util/xor.o 00:07:58.308 CC lib/util/zipf.o 00:07:58.308 SYMLINK libspdk_vfio_user.so 00:07:58.308 CC lib/util/md5.o 00:07:58.871 LIB libspdk_util.a 00:07:59.129 SO libspdk_util.so.10.1 00:07:59.387 SYMLINK libspdk_util.so 00:07:59.387 LIB libspdk_trace_parser.a 00:07:59.387 SO libspdk_trace_parser.so.6.0 00:07:59.387 CC lib/json/json_parse.o 00:07:59.387 CC lib/json/json_util.o 00:07:59.387 CC 
lib/json/json_write.o 00:07:59.387 CC lib/idxd/idxd.o 00:07:59.387 CC lib/idxd/idxd_user.o 00:07:59.387 CC lib/rdma_utils/rdma_utils.o 00:07:59.387 SYMLINK libspdk_trace_parser.so 00:07:59.387 CC lib/vmd/vmd.o 00:07:59.387 CC lib/conf/conf.o 00:07:59.387 CC lib/vmd/led.o 00:07:59.387 CC lib/env_dpdk/env.o 00:07:59.645 CC lib/idxd/idxd_kernel.o 00:07:59.904 CC lib/env_dpdk/memory.o 00:07:59.904 LIB libspdk_rdma_utils.a 00:07:59.904 CC lib/env_dpdk/pci.o 00:07:59.904 SO libspdk_rdma_utils.so.1.0 00:07:59.904 LIB libspdk_json.a 00:07:59.904 CC lib/env_dpdk/init.o 00:07:59.904 SO libspdk_json.so.6.0 00:07:59.904 SYMLINK libspdk_rdma_utils.so 00:07:59.904 CC lib/env_dpdk/threads.o 00:07:59.904 SYMLINK libspdk_json.so 00:07:59.904 CC lib/env_dpdk/pci_ioat.o 00:07:59.904 LIB libspdk_conf.a 00:08:00.162 SO libspdk_conf.so.6.0 00:08:00.162 CC lib/env_dpdk/pci_virtio.o 00:08:00.162 CC lib/env_dpdk/pci_vmd.o 00:08:00.162 SYMLINK libspdk_conf.so 00:08:00.162 CC lib/rdma_provider/common.o 00:08:00.162 CC lib/env_dpdk/pci_idxd.o 00:08:00.421 CC lib/env_dpdk/pci_event.o 00:08:00.421 CC lib/env_dpdk/sigbus_handler.o 00:08:00.421 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:00.421 LIB libspdk_idxd.a 00:08:00.421 CC lib/jsonrpc/jsonrpc_server.o 00:08:00.421 SO libspdk_idxd.so.12.1 00:08:00.421 CC lib/env_dpdk/pci_dpdk.o 00:08:00.421 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:00.421 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:00.421 SYMLINK libspdk_idxd.so 00:08:00.421 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:00.421 LIB libspdk_vmd.a 00:08:00.421 CC lib/jsonrpc/jsonrpc_client.o 00:08:00.680 SO libspdk_vmd.so.6.0 00:08:00.680 SYMLINK libspdk_vmd.so 00:08:00.680 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:00.680 LIB libspdk_rdma_provider.a 00:08:00.680 SO libspdk_rdma_provider.so.7.0 00:08:00.939 SYMLINK libspdk_rdma_provider.so 00:08:00.939 LIB libspdk_jsonrpc.a 00:08:00.939 SO libspdk_jsonrpc.so.6.0 00:08:01.197 SYMLINK libspdk_jsonrpc.so 00:08:01.456 CC lib/rpc/rpc.o 00:08:01.714 LIB libspdk_rpc.a 00:08:01.714 SO libspdk_rpc.so.6.0 00:08:01.714 LIB libspdk_env_dpdk.a 00:08:01.714 SYMLINK libspdk_rpc.so 00:08:01.974 SO libspdk_env_dpdk.so.15.1 00:08:01.974 CC lib/keyring/keyring_rpc.o 00:08:01.974 CC lib/notify/notify.o 00:08:01.974 CC lib/keyring/keyring.o 00:08:01.974 CC lib/notify/notify_rpc.o 00:08:01.974 CC lib/trace/trace.o 00:08:01.974 CC lib/trace/trace_flags.o 00:08:01.974 CC lib/trace/trace_rpc.o 00:08:01.974 SYMLINK libspdk_env_dpdk.so 00:08:02.233 LIB libspdk_notify.a 00:08:02.233 SO libspdk_notify.so.6.0 00:08:02.233 SYMLINK libspdk_notify.so 00:08:02.233 LIB libspdk_keyring.a 00:08:02.491 LIB libspdk_trace.a 00:08:02.491 SO libspdk_keyring.so.2.0 00:08:02.491 SO libspdk_trace.so.11.0 00:08:02.491 SYMLINK libspdk_keyring.so 00:08:02.491 SYMLINK libspdk_trace.so 00:08:02.750 CC lib/thread/thread.o 00:08:02.750 CC lib/thread/iobuf.o 00:08:02.750 CC lib/sock/sock.o 00:08:02.750 CC lib/sock/sock_rpc.o 00:08:03.318 LIB libspdk_sock.a 00:08:03.577 SO libspdk_sock.so.10.0 00:08:03.577 SYMLINK libspdk_sock.so 00:08:03.836 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:03.836 CC lib/nvme/nvme_ctrlr.o 00:08:03.836 CC lib/nvme/nvme_fabric.o 00:08:03.836 CC lib/nvme/nvme_ns_cmd.o 00:08:03.836 CC lib/nvme/nvme_ns.o 00:08:03.836 CC lib/nvme/nvme_pcie_common.o 00:08:03.836 CC lib/nvme/nvme_pcie.o 00:08:03.836 CC lib/nvme/nvme.o 00:08:03.836 CC lib/nvme/nvme_qpair.o 00:08:04.834 CC lib/nvme/nvme_quirks.o 00:08:04.834 CC lib/nvme/nvme_transport.o 00:08:04.834 CC lib/nvme/nvme_discovery.o 00:08:04.834 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:05.093 LIB libspdk_thread.a 00:08:05.093 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:05.093 SO libspdk_thread.so.11.0 00:08:05.093 SYMLINK libspdk_thread.so 00:08:05.352 CC lib/nvme/nvme_tcp.o 00:08:05.352 CC lib/nvme/nvme_opal.o 00:08:05.352 CC lib/accel/accel.o 00:08:05.352 CC lib/nvme/nvme_io_msg.o 00:08:05.611 CC lib/blob/blobstore.o 00:08:05.611 CC lib/init/json_config.o 00:08:05.870 CC lib/nvme/nvme_poll_group.o 00:08:05.870 CC lib/init/subsystem.o 00:08:05.870 CC lib/virtio/virtio.o 00:08:06.128 CC lib/fsdev/fsdev.o 00:08:06.128 CC lib/fsdev/fsdev_io.o 00:08:06.128 CC lib/init/subsystem_rpc.o 00:08:06.386 CC lib/virtio/virtio_vhost_user.o 00:08:06.386 CC lib/fsdev/fsdev_rpc.o 00:08:06.645 CC lib/init/rpc.o 00:08:06.645 CC lib/accel/accel_rpc.o 00:08:06.645 CC lib/accel/accel_sw.o 00:08:06.645 CC lib/blob/request.o 00:08:06.645 LIB libspdk_init.a 00:08:06.904 SO libspdk_init.so.6.0 00:08:06.904 CC lib/blob/zeroes.o 00:08:06.904 SYMLINK libspdk_init.so 00:08:06.904 CC lib/blob/blob_bs_dev.o 00:08:06.904 LIB libspdk_fsdev.a 00:08:06.904 CC lib/virtio/virtio_vfio_user.o 00:08:06.904 SO libspdk_fsdev.so.2.0 00:08:06.904 CC lib/nvme/nvme_zns.o 00:08:07.163 CC lib/virtio/virtio_pci.o 00:08:07.163 LIB libspdk_accel.a 00:08:07.163 SYMLINK libspdk_fsdev.so 00:08:07.163 SO libspdk_accel.so.16.0 00:08:07.163 CC lib/nvme/nvme_stubs.o 00:08:07.163 SYMLINK libspdk_accel.so 00:08:07.163 CC lib/nvme/nvme_auth.o 00:08:07.163 CC lib/event/app.o 00:08:07.422 CC lib/event/reactor.o 00:08:07.422 CC lib/nvme/nvme_cuse.o 00:08:07.422 CC lib/nvme/nvme_rdma.o 00:08:07.422 CC lib/event/log_rpc.o 00:08:07.422 LIB libspdk_virtio.a 00:08:07.422 SO libspdk_virtio.so.7.0 00:08:07.681 CC lib/event/app_rpc.o 00:08:07.681 SYMLINK libspdk_virtio.so 00:08:07.939 CC lib/event/scheduler_static.o 00:08:07.939 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:07.939 CC lib/bdev/bdev.o 00:08:07.939 CC lib/bdev/bdev_rpc.o 00:08:07.939 CC lib/bdev/part.o 00:08:07.939 CC lib/bdev/bdev_zone.o 00:08:07.939 LIB libspdk_event.a 00:08:08.197 SO libspdk_event.so.14.0 00:08:08.197 CC lib/bdev/scsi_nvme.o 00:08:08.197 SYMLINK libspdk_event.so 00:08:08.763 LIB libspdk_fuse_dispatcher.a 00:08:09.021 SO libspdk_fuse_dispatcher.so.1.0 00:08:09.021 SYMLINK libspdk_fuse_dispatcher.so 00:08:09.280 LIB libspdk_nvme.a 00:08:09.538 SO libspdk_nvme.so.15.0 00:08:09.795 SYMLINK libspdk_nvme.so 00:08:10.361 LIB libspdk_blob.a 00:08:10.361 SO libspdk_blob.so.12.0 00:08:10.361 SYMLINK libspdk_blob.so 00:08:10.619 CC lib/blobfs/tree.o 00:08:10.619 CC lib/blobfs/blobfs.o 00:08:10.619 CC lib/lvol/lvol.o 00:08:11.991 LIB libspdk_bdev.a 00:08:11.991 LIB libspdk_blobfs.a 00:08:11.992 SO libspdk_blobfs.so.11.0 00:08:11.992 SO libspdk_bdev.so.17.0 00:08:11.992 SYMLINK libspdk_blobfs.so 00:08:11.992 SYMLINK libspdk_bdev.so 00:08:11.992 LIB libspdk_lvol.a 00:08:12.249 SO libspdk_lvol.so.11.0 00:08:12.249 SYMLINK libspdk_lvol.so 00:08:12.249 CC lib/nbd/nbd.o 00:08:12.249 CC lib/nbd/nbd_rpc.o 00:08:12.249 CC lib/ftl/ftl_core.o 00:08:12.249 CC lib/ftl/ftl_init.o 00:08:12.249 CC lib/scsi/dev.o 00:08:12.249 CC lib/ftl/ftl_layout.o 00:08:12.249 CC lib/ftl/ftl_debug.o 00:08:12.249 CC lib/ftl/ftl_io.o 00:08:12.249 CC lib/ublk/ublk.o 00:08:12.249 CC lib/nvmf/ctrlr.o 00:08:12.507 CC lib/ublk/ublk_rpc.o 00:08:12.507 CC lib/ftl/ftl_sb.o 00:08:12.507 CC lib/ftl/ftl_l2p.o 00:08:12.507 CC lib/scsi/lun.o 00:08:12.765 CC lib/ftl/ftl_l2p_flat.o 00:08:12.765 CC lib/ftl/ftl_nv_cache.o 00:08:12.765 CC lib/ftl/ftl_band.o 00:08:12.765 CC lib/scsi/port.o 
00:08:12.765 LIB libspdk_nbd.a 00:08:12.765 CC lib/ftl/ftl_band_ops.o 00:08:12.765 CC lib/nvmf/ctrlr_discovery.o 00:08:12.765 SO libspdk_nbd.so.7.0 00:08:13.023 CC lib/ftl/ftl_writer.o 00:08:13.023 SYMLINK libspdk_nbd.so 00:08:13.023 CC lib/ftl/ftl_rq.o 00:08:13.023 CC lib/scsi/scsi.o 00:08:13.023 CC lib/nvmf/ctrlr_bdev.o 00:08:13.281 CC lib/scsi/scsi_bdev.o 00:08:13.281 CC lib/ftl/ftl_reloc.o 00:08:13.281 LIB libspdk_ublk.a 00:08:13.281 CC lib/nvmf/subsystem.o 00:08:13.281 CC lib/nvmf/nvmf.o 00:08:13.281 CC lib/nvmf/nvmf_rpc.o 00:08:13.281 SO libspdk_ublk.so.3.0 00:08:13.281 SYMLINK libspdk_ublk.so 00:08:13.281 CC lib/ftl/ftl_l2p_cache.o 00:08:13.539 CC lib/nvmf/transport.o 00:08:13.539 CC lib/nvmf/tcp.o 00:08:13.796 CC lib/scsi/scsi_pr.o 00:08:14.055 CC lib/nvmf/stubs.o 00:08:14.055 CC lib/nvmf/mdns_server.o 00:08:14.055 CC lib/ftl/ftl_p2l.o 00:08:14.313 CC lib/scsi/scsi_rpc.o 00:08:14.313 CC lib/scsi/task.o 00:08:14.313 CC lib/nvmf/rdma.o 00:08:14.571 CC lib/nvmf/auth.o 00:08:14.571 CC lib/ftl/ftl_p2l_log.o 00:08:14.571 CC lib/ftl/mngt/ftl_mngt.o 00:08:14.571 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:14.571 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:14.571 LIB libspdk_scsi.a 00:08:14.830 SO libspdk_scsi.so.9.0 00:08:14.830 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:14.830 SYMLINK libspdk_scsi.so 00:08:14.830 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:14.830 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:14.830 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:15.087 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:15.087 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:15.087 CC lib/iscsi/conn.o 00:08:15.087 CC lib/vhost/vhost.o 00:08:15.345 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:15.345 CC lib/vhost/vhost_rpc.o 00:08:15.345 CC lib/vhost/vhost_scsi.o 00:08:15.345 CC lib/vhost/vhost_blk.o 00:08:15.603 CC lib/vhost/rte_vhost_user.o 00:08:15.603 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:15.603 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:15.861 CC lib/iscsi/init_grp.o 00:08:15.861 CC lib/iscsi/iscsi.o 00:08:15.861 CC lib/iscsi/param.o 00:08:16.120 CC lib/iscsi/portal_grp.o 00:08:16.120 CC lib/iscsi/tgt_node.o 00:08:16.120 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:16.379 CC lib/ftl/utils/ftl_conf.o 00:08:16.379 CC lib/iscsi/iscsi_subsystem.o 00:08:16.379 CC lib/iscsi/iscsi_rpc.o 00:08:16.379 CC lib/iscsi/task.o 00:08:16.379 CC lib/ftl/utils/ftl_md.o 00:08:16.637 CC lib/ftl/utils/ftl_mempool.o 00:08:16.637 CC lib/ftl/utils/ftl_bitmap.o 00:08:16.637 CC lib/ftl/utils/ftl_property.o 00:08:16.637 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:16.637 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:16.637 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:16.894 LIB libspdk_vhost.a 00:08:16.894 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:16.894 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:16.894 SO libspdk_vhost.so.8.0 00:08:16.894 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:16.894 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:16.894 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:17.153 SYMLINK libspdk_vhost.so 00:08:17.153 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:17.153 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:17.153 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:17.153 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:17.153 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:17.153 CC lib/ftl/base/ftl_base_dev.o 00:08:17.153 CC lib/ftl/base/ftl_base_bdev.o 00:08:17.153 CC lib/ftl/ftl_trace.o 00:08:17.720 LIB libspdk_nvmf.a 00:08:17.720 LIB libspdk_ftl.a 00:08:17.720 SO libspdk_nvmf.so.20.0 00:08:17.979 SO libspdk_ftl.so.9.0 00:08:17.979 LIB libspdk_iscsi.a 00:08:17.979 SYMLINK libspdk_nvmf.so 00:08:18.237 SO 
libspdk_iscsi.so.8.0 00:08:18.237 SYMLINK libspdk_ftl.so 00:08:18.494 SYMLINK libspdk_iscsi.so 00:08:18.751 CC module/env_dpdk/env_dpdk_rpc.o 00:08:19.009 CC module/accel/dsa/accel_dsa.o 00:08:19.009 CC module/keyring/file/keyring.o 00:08:19.009 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:19.009 CC module/fsdev/aio/fsdev_aio.o 00:08:19.009 CC module/blob/bdev/blob_bdev.o 00:08:19.009 CC module/accel/ioat/accel_ioat.o 00:08:19.009 CC module/accel/error/accel_error.o 00:08:19.009 CC module/accel/iaa/accel_iaa.o 00:08:19.009 CC module/sock/posix/posix.o 00:08:19.009 LIB libspdk_env_dpdk_rpc.a 00:08:19.009 SO libspdk_env_dpdk_rpc.so.6.0 00:08:19.009 CC module/keyring/file/keyring_rpc.o 00:08:19.009 SYMLINK libspdk_env_dpdk_rpc.so 00:08:19.009 CC module/accel/iaa/accel_iaa_rpc.o 00:08:19.267 LIB libspdk_scheduler_dynamic.a 00:08:19.267 CC module/accel/ioat/accel_ioat_rpc.o 00:08:19.267 SO libspdk_scheduler_dynamic.so.4.0 00:08:19.267 LIB libspdk_keyring_file.a 00:08:19.267 CC module/accel/error/accel_error_rpc.o 00:08:19.267 CC module/accel/dsa/accel_dsa_rpc.o 00:08:19.267 LIB libspdk_accel_iaa.a 00:08:19.267 SO libspdk_keyring_file.so.2.0 00:08:19.267 SYMLINK libspdk_scheduler_dynamic.so 00:08:19.267 LIB libspdk_blob_bdev.a 00:08:19.267 SO libspdk_accel_iaa.so.3.0 00:08:19.267 LIB libspdk_accel_ioat.a 00:08:19.267 SO libspdk_blob_bdev.so.12.0 00:08:19.267 SYMLINK libspdk_keyring_file.so 00:08:19.524 LIB libspdk_accel_error.a 00:08:19.524 SO libspdk_accel_ioat.so.6.0 00:08:19.524 LIB libspdk_accel_dsa.a 00:08:19.524 SYMLINK libspdk_accel_iaa.so 00:08:19.524 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:19.524 SO libspdk_accel_error.so.2.0 00:08:19.524 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:19.524 SYMLINK libspdk_blob_bdev.so 00:08:19.524 SO libspdk_accel_dsa.so.5.0 00:08:19.524 CC module/fsdev/aio/linux_aio_mgr.o 00:08:19.524 SYMLINK libspdk_accel_ioat.so 00:08:19.524 CC module/scheduler/gscheduler/gscheduler.o 00:08:19.524 SYMLINK libspdk_accel_dsa.so 00:08:19.524 SYMLINK libspdk_accel_error.so 00:08:19.524 CC module/keyring/linux/keyring.o 00:08:19.524 CC module/keyring/linux/keyring_rpc.o 00:08:19.781 LIB libspdk_scheduler_dpdk_governor.a 00:08:19.781 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:19.781 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:19.781 LIB libspdk_scheduler_gscheduler.a 00:08:19.781 LIB libspdk_fsdev_aio.a 00:08:19.781 SO libspdk_scheduler_gscheduler.so.4.0 00:08:19.781 CC module/bdev/error/vbdev_error.o 00:08:19.781 LIB libspdk_keyring_linux.a 00:08:19.781 CC module/bdev/delay/vbdev_delay.o 00:08:19.781 SO libspdk_fsdev_aio.so.1.0 00:08:20.038 CC module/blobfs/bdev/blobfs_bdev.o 00:08:20.038 SO libspdk_keyring_linux.so.1.0 00:08:20.038 LIB libspdk_sock_posix.a 00:08:20.038 SYMLINK libspdk_scheduler_gscheduler.so 00:08:20.038 CC module/bdev/error/vbdev_error_rpc.o 00:08:20.038 CC module/bdev/gpt/gpt.o 00:08:20.038 SYMLINK libspdk_keyring_linux.so 00:08:20.038 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:20.038 CC module/bdev/lvol/vbdev_lvol.o 00:08:20.038 SO libspdk_sock_posix.so.6.0 00:08:20.038 SYMLINK libspdk_fsdev_aio.so 00:08:20.038 CC module/bdev/malloc/bdev_malloc.o 00:08:20.038 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:20.295 SYMLINK libspdk_sock_posix.so 00:08:20.295 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:20.295 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:20.295 LIB libspdk_blobfs_bdev.a 00:08:20.295 CC module/bdev/gpt/vbdev_gpt.o 00:08:20.295 LIB libspdk_bdev_error.a 00:08:20.295 SO libspdk_blobfs_bdev.so.6.0 00:08:20.295 SO 
libspdk_bdev_error.so.6.0 00:08:20.295 SYMLINK libspdk_bdev_error.so 00:08:20.295 SYMLINK libspdk_blobfs_bdev.so 00:08:20.554 LIB libspdk_bdev_delay.a 00:08:20.554 SO libspdk_bdev_delay.so.6.0 00:08:20.554 CC module/bdev/null/bdev_null.o 00:08:20.554 SYMLINK libspdk_bdev_delay.so 00:08:20.554 CC module/bdev/nvme/bdev_nvme.o 00:08:20.554 LIB libspdk_bdev_malloc.a 00:08:20.554 CC module/bdev/passthru/vbdev_passthru.o 00:08:20.554 LIB libspdk_bdev_gpt.a 00:08:20.554 CC module/bdev/raid/bdev_raid.o 00:08:20.554 SO libspdk_bdev_gpt.so.6.0 00:08:20.554 CC module/bdev/split/vbdev_split.o 00:08:20.554 SO libspdk_bdev_malloc.so.6.0 00:08:20.812 CC module/bdev/split/vbdev_split_rpc.o 00:08:20.812 SYMLINK libspdk_bdev_malloc.so 00:08:20.812 CC module/bdev/null/bdev_null_rpc.o 00:08:20.812 SYMLINK libspdk_bdev_gpt.so 00:08:20.812 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:20.812 LIB libspdk_bdev_lvol.a 00:08:20.812 SO libspdk_bdev_lvol.so.6.0 00:08:20.812 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:20.812 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:21.070 LIB libspdk_bdev_split.a 00:08:21.070 SYMLINK libspdk_bdev_lvol.so 00:08:21.070 CC module/bdev/raid/bdev_raid_rpc.o 00:08:21.070 CC module/bdev/xnvme/bdev_xnvme.o 00:08:21.070 SO libspdk_bdev_split.so.6.0 00:08:21.070 LIB libspdk_bdev_null.a 00:08:21.070 SO libspdk_bdev_null.so.6.0 00:08:21.070 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:21.070 SYMLINK libspdk_bdev_split.so 00:08:21.070 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:08:21.070 CC module/bdev/raid/bdev_raid_sb.o 00:08:21.070 SYMLINK libspdk_bdev_null.so 00:08:21.328 LIB libspdk_bdev_zone_block.a 00:08:21.328 LIB libspdk_bdev_passthru.a 00:08:21.328 SO libspdk_bdev_zone_block.so.6.0 00:08:21.328 SO libspdk_bdev_passthru.so.6.0 00:08:21.328 CC module/bdev/raid/raid0.o 00:08:21.328 CC module/bdev/aio/bdev_aio.o 00:08:21.328 SYMLINK libspdk_bdev_zone_block.so 00:08:21.328 SYMLINK libspdk_bdev_passthru.so 00:08:21.328 CC module/bdev/raid/raid1.o 00:08:21.328 LIB libspdk_bdev_xnvme.a 00:08:21.585 CC module/bdev/ftl/bdev_ftl.o 00:08:21.585 SO libspdk_bdev_xnvme.so.3.0 00:08:21.585 CC module/bdev/nvme/nvme_rpc.o 00:08:21.585 SYMLINK libspdk_bdev_xnvme.so 00:08:21.585 CC module/bdev/raid/concat.o 00:08:21.585 CC module/bdev/iscsi/bdev_iscsi.o 00:08:21.841 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:21.841 CC module/bdev/nvme/bdev_mdns_client.o 00:08:21.841 CC module/bdev/aio/bdev_aio_rpc.o 00:08:21.841 CC module/bdev/nvme/vbdev_opal.o 00:08:21.841 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:21.841 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:21.841 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:22.100 LIB libspdk_bdev_aio.a 00:08:22.100 SO libspdk_bdev_aio.so.6.0 00:08:22.100 LIB libspdk_bdev_raid.a 00:08:22.100 LIB libspdk_bdev_iscsi.a 00:08:22.100 SYMLINK libspdk_bdev_aio.so 00:08:22.100 LIB libspdk_bdev_ftl.a 00:08:22.100 SO libspdk_bdev_iscsi.so.6.0 00:08:22.100 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:22.100 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:22.100 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:22.100 SO libspdk_bdev_raid.so.6.0 00:08:22.100 SO libspdk_bdev_ftl.so.6.0 00:08:22.357 SYMLINK libspdk_bdev_iscsi.so 00:08:22.357 SYMLINK libspdk_bdev_ftl.so 00:08:22.357 SYMLINK libspdk_bdev_raid.so 00:08:22.922 LIB libspdk_bdev_virtio.a 00:08:22.922 SO libspdk_bdev_virtio.so.6.0 00:08:22.922 SYMLINK libspdk_bdev_virtio.so 00:08:24.852 LIB libspdk_bdev_nvme.a 00:08:24.852 SO libspdk_bdev_nvme.so.7.1 00:08:24.852 SYMLINK libspdk_bdev_nvme.so 00:08:25.110 CC 
module/event/subsystems/scheduler/scheduler.o 00:08:25.110 CC module/event/subsystems/keyring/keyring.o 00:08:25.110 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:25.110 CC module/event/subsystems/vmd/vmd.o 00:08:25.110 CC module/event/subsystems/iobuf/iobuf.o 00:08:25.110 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:25.110 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:25.110 CC module/event/subsystems/fsdev/fsdev.o 00:08:25.110 CC module/event/subsystems/sock/sock.o 00:08:25.368 LIB libspdk_event_keyring.a 00:08:25.368 LIB libspdk_event_scheduler.a 00:08:25.368 LIB libspdk_event_vhost_blk.a 00:08:25.368 SO libspdk_event_keyring.so.1.0 00:08:25.368 LIB libspdk_event_fsdev.a 00:08:25.368 SO libspdk_event_scheduler.so.4.0 00:08:25.368 SO libspdk_event_vhost_blk.so.3.0 00:08:25.368 LIB libspdk_event_iobuf.a 00:08:25.368 LIB libspdk_event_vmd.a 00:08:25.368 LIB libspdk_event_sock.a 00:08:25.368 SO libspdk_event_fsdev.so.1.0 00:08:25.368 SO libspdk_event_iobuf.so.3.0 00:08:25.368 SO libspdk_event_sock.so.5.0 00:08:25.368 SO libspdk_event_vmd.so.6.0 00:08:25.368 SYMLINK libspdk_event_keyring.so 00:08:25.368 SYMLINK libspdk_event_scheduler.so 00:08:25.368 SYMLINK libspdk_event_vhost_blk.so 00:08:25.368 SYMLINK libspdk_event_fsdev.so 00:08:25.368 SYMLINK libspdk_event_iobuf.so 00:08:25.368 SYMLINK libspdk_event_vmd.so 00:08:25.368 SYMLINK libspdk_event_sock.so 00:08:25.626 CC module/event/subsystems/accel/accel.o 00:08:25.883 LIB libspdk_event_accel.a 00:08:25.883 SO libspdk_event_accel.so.6.0 00:08:26.142 SYMLINK libspdk_event_accel.so 00:08:26.400 CC module/event/subsystems/bdev/bdev.o 00:08:26.659 LIB libspdk_event_bdev.a 00:08:26.659 SO libspdk_event_bdev.so.6.0 00:08:26.659 SYMLINK libspdk_event_bdev.so 00:08:26.917 CC module/event/subsystems/nbd/nbd.o 00:08:26.917 CC module/event/subsystems/scsi/scsi.o 00:08:26.917 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:26.917 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:26.917 CC module/event/subsystems/ublk/ublk.o 00:08:26.917 LIB libspdk_event_nbd.a 00:08:27.175 LIB libspdk_event_ublk.a 00:08:27.175 SO libspdk_event_nbd.so.6.0 00:08:27.175 LIB libspdk_event_scsi.a 00:08:27.175 SO libspdk_event_ublk.so.3.0 00:08:27.175 LIB libspdk_event_nvmf.a 00:08:27.175 SO libspdk_event_scsi.so.6.0 00:08:27.175 SYMLINK libspdk_event_nbd.so 00:08:27.175 SO libspdk_event_nvmf.so.6.0 00:08:27.175 SYMLINK libspdk_event_ublk.so 00:08:27.175 SYMLINK libspdk_event_scsi.so 00:08:27.175 SYMLINK libspdk_event_nvmf.so 00:08:27.435 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:27.435 CC module/event/subsystems/iscsi/iscsi.o 00:08:27.694 LIB libspdk_event_vhost_scsi.a 00:08:27.694 LIB libspdk_event_iscsi.a 00:08:27.694 SO libspdk_event_vhost_scsi.so.3.0 00:08:27.694 SO libspdk_event_iscsi.so.6.0 00:08:27.694 SYMLINK libspdk_event_vhost_scsi.so 00:08:27.694 SYMLINK libspdk_event_iscsi.so 00:08:27.953 SO libspdk.so.6.0 00:08:27.953 SYMLINK libspdk.so 00:08:28.212 CXX app/trace/trace.o 00:08:28.213 TEST_HEADER include/spdk/accel.h 00:08:28.213 CC test/rpc_client/rpc_client_test.o 00:08:28.213 TEST_HEADER include/spdk/accel_module.h 00:08:28.213 TEST_HEADER include/spdk/assert.h 00:08:28.213 TEST_HEADER include/spdk/barrier.h 00:08:28.213 TEST_HEADER include/spdk/base64.h 00:08:28.213 TEST_HEADER include/spdk/bdev.h 00:08:28.213 TEST_HEADER include/spdk/bdev_module.h 00:08:28.213 TEST_HEADER include/spdk/bdev_zone.h 00:08:28.213 TEST_HEADER include/spdk/bit_array.h 00:08:28.213 TEST_HEADER include/spdk/bit_pool.h 00:08:28.213 TEST_HEADER 
include/spdk/blob_bdev.h 00:08:28.213 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:28.213 TEST_HEADER include/spdk/blobfs.h 00:08:28.213 TEST_HEADER include/spdk/blob.h 00:08:28.213 TEST_HEADER include/spdk/conf.h 00:08:28.213 TEST_HEADER include/spdk/config.h 00:08:28.213 TEST_HEADER include/spdk/cpuset.h 00:08:28.213 TEST_HEADER include/spdk/crc16.h 00:08:28.213 TEST_HEADER include/spdk/crc32.h 00:08:28.213 TEST_HEADER include/spdk/crc64.h 00:08:28.213 TEST_HEADER include/spdk/dif.h 00:08:28.213 TEST_HEADER include/spdk/dma.h 00:08:28.213 TEST_HEADER include/spdk/endian.h 00:08:28.213 TEST_HEADER include/spdk/env_dpdk.h 00:08:28.213 TEST_HEADER include/spdk/env.h 00:08:28.213 TEST_HEADER include/spdk/event.h 00:08:28.213 TEST_HEADER include/spdk/fd_group.h 00:08:28.213 TEST_HEADER include/spdk/fd.h 00:08:28.213 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:28.213 TEST_HEADER include/spdk/file.h 00:08:28.213 TEST_HEADER include/spdk/fsdev.h 00:08:28.213 TEST_HEADER include/spdk/fsdev_module.h 00:08:28.213 TEST_HEADER include/spdk/ftl.h 00:08:28.213 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:28.213 TEST_HEADER include/spdk/gpt_spec.h 00:08:28.213 TEST_HEADER include/spdk/hexlify.h 00:08:28.213 TEST_HEADER include/spdk/histogram_data.h 00:08:28.213 TEST_HEADER include/spdk/idxd.h 00:08:28.213 TEST_HEADER include/spdk/idxd_spec.h 00:08:28.213 CC examples/util/zipf/zipf.o 00:08:28.213 TEST_HEADER include/spdk/init.h 00:08:28.213 TEST_HEADER include/spdk/ioat.h 00:08:28.213 CC examples/ioat/perf/perf.o 00:08:28.213 CC test/thread/poller_perf/poller_perf.o 00:08:28.213 TEST_HEADER include/spdk/ioat_spec.h 00:08:28.213 TEST_HEADER include/spdk/iscsi_spec.h 00:08:28.213 TEST_HEADER include/spdk/json.h 00:08:28.213 TEST_HEADER include/spdk/jsonrpc.h 00:08:28.213 TEST_HEADER include/spdk/keyring.h 00:08:28.213 TEST_HEADER include/spdk/keyring_module.h 00:08:28.213 TEST_HEADER include/spdk/likely.h 00:08:28.213 TEST_HEADER include/spdk/log.h 00:08:28.213 TEST_HEADER include/spdk/lvol.h 00:08:28.213 TEST_HEADER include/spdk/md5.h 00:08:28.213 TEST_HEADER include/spdk/memory.h 00:08:28.213 TEST_HEADER include/spdk/mmio.h 00:08:28.213 TEST_HEADER include/spdk/nbd.h 00:08:28.213 TEST_HEADER include/spdk/net.h 00:08:28.213 TEST_HEADER include/spdk/notify.h 00:08:28.213 TEST_HEADER include/spdk/nvme.h 00:08:28.213 TEST_HEADER include/spdk/nvme_intel.h 00:08:28.213 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:28.213 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:28.213 TEST_HEADER include/spdk/nvme_spec.h 00:08:28.213 TEST_HEADER include/spdk/nvme_zns.h 00:08:28.213 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:28.213 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:28.213 TEST_HEADER include/spdk/nvmf.h 00:08:28.213 TEST_HEADER include/spdk/nvmf_spec.h 00:08:28.213 TEST_HEADER include/spdk/nvmf_transport.h 00:08:28.213 TEST_HEADER include/spdk/opal.h 00:08:28.213 TEST_HEADER include/spdk/opal_spec.h 00:08:28.213 TEST_HEADER include/spdk/pci_ids.h 00:08:28.213 TEST_HEADER include/spdk/pipe.h 00:08:28.213 CC test/dma/test_dma/test_dma.o 00:08:28.213 TEST_HEADER include/spdk/queue.h 00:08:28.213 CC test/app/bdev_svc/bdev_svc.o 00:08:28.213 TEST_HEADER include/spdk/reduce.h 00:08:28.213 TEST_HEADER include/spdk/rpc.h 00:08:28.213 TEST_HEADER include/spdk/scheduler.h 00:08:28.213 TEST_HEADER include/spdk/scsi.h 00:08:28.213 CC test/env/mem_callbacks/mem_callbacks.o 00:08:28.213 TEST_HEADER include/spdk/scsi_spec.h 00:08:28.213 TEST_HEADER include/spdk/sock.h 00:08:28.213 TEST_HEADER include/spdk/stdinc.h 
00:08:28.213 TEST_HEADER include/spdk/string.h 00:08:28.213 TEST_HEADER include/spdk/thread.h 00:08:28.213 TEST_HEADER include/spdk/trace.h 00:08:28.213 TEST_HEADER include/spdk/trace_parser.h 00:08:28.213 TEST_HEADER include/spdk/tree.h 00:08:28.476 TEST_HEADER include/spdk/ublk.h 00:08:28.476 TEST_HEADER include/spdk/util.h 00:08:28.476 TEST_HEADER include/spdk/uuid.h 00:08:28.476 TEST_HEADER include/spdk/version.h 00:08:28.476 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:28.476 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:28.476 TEST_HEADER include/spdk/vhost.h 00:08:28.476 TEST_HEADER include/spdk/vmd.h 00:08:28.476 TEST_HEADER include/spdk/xor.h 00:08:28.476 TEST_HEADER include/spdk/zipf.h 00:08:28.476 CXX test/cpp_headers/accel.o 00:08:28.476 LINK rpc_client_test 00:08:28.476 LINK zipf 00:08:28.476 LINK poller_perf 00:08:28.476 LINK interrupt_tgt 00:08:28.476 LINK ioat_perf 00:08:28.476 CXX test/cpp_headers/accel_module.o 00:08:28.476 CXX test/cpp_headers/assert.o 00:08:28.476 LINK bdev_svc 00:08:28.476 CXX test/cpp_headers/barrier.o 00:08:28.745 LINK spdk_trace 00:08:28.745 CC app/trace_record/trace_record.o 00:08:28.745 CC examples/ioat/verify/verify.o 00:08:28.745 CXX test/cpp_headers/base64.o 00:08:29.003 CC app/nvmf_tgt/nvmf_main.o 00:08:29.003 CC examples/thread/thread/thread_ex.o 00:08:29.003 LINK test_dma 00:08:29.003 CC examples/sock/hello_world/hello_sock.o 00:08:29.003 CXX test/cpp_headers/bdev.o 00:08:29.003 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:29.003 LINK mem_callbacks 00:08:29.003 CC test/app/histogram_perf/histogram_perf.o 00:08:29.003 LINK verify 00:08:29.262 LINK spdk_trace_record 00:08:29.262 LINK nvmf_tgt 00:08:29.262 LINK histogram_perf 00:08:29.262 CXX test/cpp_headers/bdev_module.o 00:08:29.262 CC test/env/vtophys/vtophys.o 00:08:29.262 LINK thread 00:08:29.520 CXX test/cpp_headers/bdev_zone.o 00:08:29.520 LINK hello_sock 00:08:29.520 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:29.520 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:29.520 CXX test/cpp_headers/bit_array.o 00:08:29.520 LINK vtophys 00:08:29.779 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:29.779 CC app/iscsi_tgt/iscsi_tgt.o 00:08:29.779 LINK nvme_fuzz 00:08:29.779 CC app/spdk_tgt/spdk_tgt.o 00:08:29.779 CXX test/cpp_headers/bit_pool.o 00:08:29.779 CC test/event/event_perf/event_perf.o 00:08:29.779 CC test/event/reactor/reactor.o 00:08:29.779 CC examples/vmd/lsvmd/lsvmd.o 00:08:29.779 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:30.038 LINK iscsi_tgt 00:08:30.038 LINK event_perf 00:08:30.038 CC test/env/memory/memory_ut.o 00:08:30.038 LINK spdk_tgt 00:08:30.038 CXX test/cpp_headers/blob_bdev.o 00:08:30.038 LINK reactor 00:08:30.038 LINK lsvmd 00:08:30.297 LINK env_dpdk_post_init 00:08:30.297 LINK vhost_fuzz 00:08:30.297 CC test/env/pci/pci_ut.o 00:08:30.297 CC test/event/reactor_perf/reactor_perf.o 00:08:30.297 CXX test/cpp_headers/blobfs_bdev.o 00:08:30.297 CXX test/cpp_headers/blobfs.o 00:08:30.557 CC examples/vmd/led/led.o 00:08:30.557 CXX test/cpp_headers/blob.o 00:08:30.557 LINK reactor_perf 00:08:30.557 CXX test/cpp_headers/conf.o 00:08:30.557 LINK led 00:08:30.557 CC test/event/app_repeat/app_repeat.o 00:08:30.557 CC app/spdk_lspci/spdk_lspci.o 00:08:30.557 CXX test/cpp_headers/config.o 00:08:30.815 LINK pci_ut 00:08:30.815 LINK app_repeat 00:08:30.815 CXX test/cpp_headers/cpuset.o 00:08:31.074 CC test/nvme/aer/aer.o 00:08:31.074 LINK spdk_lspci 00:08:31.074 CXX test/cpp_headers/crc16.o 00:08:31.074 CC test/accel/dif/dif.o 00:08:31.074 CC 
test/blobfs/mkfs/mkfs.o 00:08:31.074 CC examples/idxd/perf/perf.o 00:08:31.333 CC test/app/jsoncat/jsoncat.o 00:08:31.333 CC test/event/scheduler/scheduler.o 00:08:31.333 LINK mkfs 00:08:31.333 CXX test/cpp_headers/crc32.o 00:08:31.592 CC app/spdk_nvme_perf/perf.o 00:08:31.592 LINK jsoncat 00:08:31.592 LINK aer 00:08:31.592 LINK memory_ut 00:08:31.592 LINK idxd_perf 00:08:31.850 CXX test/cpp_headers/crc64.o 00:08:31.851 LINK scheduler 00:08:31.851 CXX test/cpp_headers/dif.o 00:08:31.851 CC test/app/stub/stub.o 00:08:32.109 LINK dif 00:08:32.109 CC test/nvme/reset/reset.o 00:08:32.109 CC test/lvol/esnap/esnap.o 00:08:32.109 CC test/nvme/sgl/sgl.o 00:08:32.109 LINK iscsi_fuzz 00:08:32.109 CXX test/cpp_headers/dma.o 00:08:32.109 LINK stub 00:08:32.368 CXX test/cpp_headers/endian.o 00:08:32.368 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:32.368 CC test/nvme/e2edp/nvme_dp.o 00:08:32.368 LINK reset 00:08:32.368 LINK sgl 00:08:32.368 CC test/nvme/overhead/overhead.o 00:08:32.626 CXX test/cpp_headers/env_dpdk.o 00:08:32.626 CC test/nvme/err_injection/err_injection.o 00:08:32.626 CC test/bdev/bdevio/bdevio.o 00:08:32.626 CC test/nvme/startup/startup.o 00:08:32.626 LINK hello_fsdev 00:08:32.626 CXX test/cpp_headers/env.o 00:08:32.626 CC test/nvme/reserve/reserve.o 00:08:32.885 LINK nvme_dp 00:08:32.885 LINK spdk_nvme_perf 00:08:32.885 LINK overhead 00:08:32.885 LINK startup 00:08:32.885 LINK err_injection 00:08:32.885 LINK reserve 00:08:33.148 CXX test/cpp_headers/event.o 00:08:33.148 CC app/spdk_nvme_identify/identify.o 00:08:33.148 CC examples/accel/perf/accel_perf.o 00:08:33.148 CC test/nvme/simple_copy/simple_copy.o 00:08:33.148 LINK bdevio 00:08:33.148 CC app/spdk_nvme_discover/discovery_aer.o 00:08:33.148 CXX test/cpp_headers/fd_group.o 00:08:33.415 CC app/spdk_top/spdk_top.o 00:08:33.416 CC app/vhost/vhost.o 00:08:33.416 CC app/spdk_dd/spdk_dd.o 00:08:33.675 CXX test/cpp_headers/fd.o 00:08:33.675 LINK vhost 00:08:33.675 LINK spdk_nvme_discover 00:08:33.675 LINK simple_copy 00:08:33.948 CXX test/cpp_headers/file.o 00:08:33.948 LINK accel_perf 00:08:33.948 CC app/fio/nvme/fio_plugin.o 00:08:33.948 LINK spdk_dd 00:08:33.948 CXX test/cpp_headers/fsdev.o 00:08:33.948 CC app/fio/bdev/fio_plugin.o 00:08:34.207 CC test/nvme/connect_stress/connect_stress.o 00:08:34.464 LINK connect_stress 00:08:34.464 CXX test/cpp_headers/fsdev_module.o 00:08:34.464 LINK spdk_nvme_identify 00:08:34.464 CC examples/blob/cli/blobcli.o 00:08:34.464 CC examples/blob/hello_world/hello_blob.o 00:08:34.464 CC examples/nvme/hello_world/hello_world.o 00:08:34.464 CXX test/cpp_headers/ftl.o 00:08:34.723 CC test/nvme/boot_partition/boot_partition.o 00:08:34.723 CXX test/cpp_headers/fuse_dispatcher.o 00:08:34.980 LINK boot_partition 00:08:34.980 CC examples/bdev/hello_world/hello_bdev.o 00:08:34.980 LINK hello_blob 00:08:34.980 LINK hello_world 00:08:34.980 CXX test/cpp_headers/gpt_spec.o 00:08:34.980 LINK spdk_nvme 00:08:34.980 LINK spdk_bdev 00:08:35.237 CXX test/cpp_headers/hexlify.o 00:08:35.237 LINK spdk_top 00:08:35.237 CC test/nvme/compliance/nvme_compliance.o 00:08:35.237 CC test/nvme/fused_ordering/fused_ordering.o 00:08:35.237 CC examples/nvme/reconnect/reconnect.o 00:08:35.237 LINK blobcli 00:08:35.237 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:35.495 CC test/nvme/fdp/fdp.o 00:08:35.495 LINK hello_bdev 00:08:35.495 CXX test/cpp_headers/histogram_data.o 00:08:35.495 CC examples/bdev/bdevperf/bdevperf.o 00:08:35.495 LINK doorbell_aers 00:08:35.495 CXX test/cpp_headers/idxd.o 00:08:35.495 LINK fused_ordering 
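The CXX test/cpp_headers/*.o lines threaded through this build come from SPDK's header self-containment check: every public header under include/spdk/ is compiled in its own translation unit, so a header that forgets one of its own includes fails loudly instead of being masked by include order elsewhere. A minimal sketch of the same idea, assuming a g++ toolchain and the include/spdk layout shown in the log; the temp-file naming is illustrative, not SPDK's actual harness:

  # Sketch only: compile each public header in isolation.
  for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # One translation unit per header; if the header is not self-contained,
    # this compile fails even when the full build would happen to succeed.
    printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_$name.cpp"
    g++ -std=c++17 -Iinclude -c "/tmp/hdr_$name.cpp" -o "/tmp/hdr_$name.o" \
      || echo "not self-contained: $hdr"
  done
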
00:08:35.753 LINK nvme_compliance 00:08:35.753 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:35.753 CC test/nvme/cuse/cuse.o 00:08:35.753 LINK reconnect 00:08:35.753 CXX test/cpp_headers/idxd_spec.o 00:08:35.753 LINK fdp 00:08:35.753 CC examples/nvme/arbitration/arbitration.o 00:08:36.010 CC examples/nvme/hotplug/hotplug.o 00:08:36.010 CXX test/cpp_headers/init.o 00:08:36.011 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:36.011 CXX test/cpp_headers/ioat.o 00:08:36.011 CXX test/cpp_headers/ioat_spec.o 00:08:36.288 CC examples/nvme/abort/abort.o 00:08:36.288 LINK hotplug 00:08:36.288 LINK cmb_copy 00:08:36.545 CXX test/cpp_headers/iscsi_spec.o 00:08:36.545 LINK nvme_manage 00:08:36.545 LINK arbitration 00:08:36.545 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:36.545 CXX test/cpp_headers/json.o 00:08:36.545 LINK abort 00:08:36.802 LINK bdevperf 00:08:36.802 CXX test/cpp_headers/jsonrpc.o 00:08:36.802 CXX test/cpp_headers/keyring.o 00:08:36.802 CXX test/cpp_headers/keyring_module.o 00:08:36.802 CXX test/cpp_headers/likely.o 00:08:36.802 LINK pmr_persistence 00:08:36.802 CXX test/cpp_headers/log.o 00:08:36.802 CXX test/cpp_headers/lvol.o 00:08:36.802 CXX test/cpp_headers/md5.o 00:08:37.060 CXX test/cpp_headers/memory.o 00:08:37.060 CXX test/cpp_headers/mmio.o 00:08:37.060 CXX test/cpp_headers/nbd.o 00:08:37.060 CXX test/cpp_headers/net.o 00:08:37.060 CXX test/cpp_headers/notify.o 00:08:37.060 CXX test/cpp_headers/nvme.o 00:08:37.060 CXX test/cpp_headers/nvme_intel.o 00:08:37.060 CXX test/cpp_headers/nvme_ocssd.o 00:08:37.060 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:37.317 CXX test/cpp_headers/nvme_spec.o 00:08:37.317 CXX test/cpp_headers/nvme_zns.o 00:08:37.317 CXX test/cpp_headers/nvmf_cmd.o 00:08:37.317 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:37.317 CXX test/cpp_headers/nvmf.o 00:08:37.317 CXX test/cpp_headers/nvmf_spec.o 00:08:37.317 CC examples/nvmf/nvmf/nvmf.o 00:08:37.575 CXX test/cpp_headers/nvmf_transport.o 00:08:37.575 CXX test/cpp_headers/opal.o 00:08:37.575 CXX test/cpp_headers/opal_spec.o 00:08:37.575 CXX test/cpp_headers/pci_ids.o 00:08:37.575 LINK cuse 00:08:37.575 CXX test/cpp_headers/pipe.o 00:08:37.575 CXX test/cpp_headers/queue.o 00:08:37.575 CXX test/cpp_headers/reduce.o 00:08:37.575 CXX test/cpp_headers/rpc.o 00:08:37.832 CXX test/cpp_headers/scheduler.o 00:08:37.832 CXX test/cpp_headers/scsi.o 00:08:37.832 CXX test/cpp_headers/scsi_spec.o 00:08:37.832 CXX test/cpp_headers/sock.o 00:08:37.832 CXX test/cpp_headers/stdinc.o 00:08:37.832 CXX test/cpp_headers/string.o 00:08:37.832 CXX test/cpp_headers/thread.o 00:08:37.832 CXX test/cpp_headers/trace.o 00:08:37.832 LINK nvmf 00:08:37.832 CXX test/cpp_headers/trace_parser.o 00:08:37.832 CXX test/cpp_headers/tree.o 00:08:37.832 CXX test/cpp_headers/ublk.o 00:08:37.832 CXX test/cpp_headers/util.o 00:08:38.090 CXX test/cpp_headers/uuid.o 00:08:38.091 CXX test/cpp_headers/version.o 00:08:38.091 CXX test/cpp_headers/vfio_user_pci.o 00:08:38.091 CXX test/cpp_headers/vfio_user_spec.o 00:08:38.091 CXX test/cpp_headers/vhost.o 00:08:38.091 CXX test/cpp_headers/vmd.o 00:08:38.091 CXX test/cpp_headers/xor.o 00:08:38.091 CXX test/cpp_headers/zipf.o 00:08:39.990 LINK esnap 00:08:40.249 00:08:40.249 real 1m50.965s 00:08:40.249 user 10m45.805s 00:08:40.249 sys 2m2.840s 00:08:40.249 13:03:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:40.249 ************************************ 00:08:40.249 END TEST make 00:08:40.249 ************************************ 00:08:40.249 13:03:27 make -- 
common/autotest_common.sh@10 -- $ set +x 00:08:40.249 13:03:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:40.249 13:03:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:40.249 13:03:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:40.249 13:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.249 13:03:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:40.249 13:03:27 -- pm/common@44 -- $ pid=5336 00:08:40.249 13:03:27 -- pm/common@50 -- $ kill -TERM 5336 00:08:40.249 13:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.249 13:03:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:40.249 13:03:27 -- pm/common@44 -- $ pid=5338 00:08:40.249 13:03:27 -- pm/common@50 -- $ kill -TERM 5338 00:08:40.249 13:03:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:40.249 13:03:27 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:40.509 13:03:27 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:40.509 13:03:27 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:40.509 13:03:27 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:40.509 13:03:27 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:40.509 13:03:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.509 13:03:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.509 13:03:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.509 13:03:27 -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.509 13:03:27 -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.509 13:03:27 -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.509 13:03:27 -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.509 13:03:27 -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.509 13:03:27 -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.509 13:03:27 -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.509 13:03:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.509 13:03:27 -- scripts/common.sh@344 -- # case "$op" in 00:08:40.509 13:03:27 -- scripts/common.sh@345 -- # : 1 00:08:40.509 13:03:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.509 13:03:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.509 13:03:27 -- scripts/common.sh@365 -- # decimal 1 00:08:40.509 13:03:27 -- scripts/common.sh@353 -- # local d=1 00:08:40.509 13:03:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.509 13:03:27 -- scripts/common.sh@355 -- # echo 1 00:08:40.509 13:03:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.509 13:03:27 -- scripts/common.sh@366 -- # decimal 2 00:08:40.509 13:03:27 -- scripts/common.sh@353 -- # local d=2 00:08:40.509 13:03:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.509 13:03:27 -- scripts/common.sh@355 -- # echo 2 00:08:40.509 13:03:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.509 13:03:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.509 13:03:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.509 13:03:27 -- scripts/common.sh@368 -- # return 0 00:08:40.509 13:03:27 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.509 13:03:27 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.509 --rc genhtml_branch_coverage=1 00:08:40.509 --rc genhtml_function_coverage=1 00:08:40.509 --rc genhtml_legend=1 00:08:40.509 --rc geninfo_all_blocks=1 00:08:40.509 --rc geninfo_unexecuted_blocks=1 00:08:40.509 00:08:40.509 ' 00:08:40.509 13:03:27 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.509 --rc genhtml_branch_coverage=1 00:08:40.509 --rc genhtml_function_coverage=1 00:08:40.509 --rc genhtml_legend=1 00:08:40.509 --rc geninfo_all_blocks=1 00:08:40.509 --rc geninfo_unexecuted_blocks=1 00:08:40.509 00:08:40.509 ' 00:08:40.509 13:03:27 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.509 --rc genhtml_branch_coverage=1 00:08:40.509 --rc genhtml_function_coverage=1 00:08:40.509 --rc genhtml_legend=1 00:08:40.509 --rc geninfo_all_blocks=1 00:08:40.509 --rc geninfo_unexecuted_blocks=1 00:08:40.509 00:08:40.509 ' 00:08:40.509 13:03:27 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.509 --rc genhtml_branch_coverage=1 00:08:40.509 --rc genhtml_function_coverage=1 00:08:40.509 --rc genhtml_legend=1 00:08:40.509 --rc geninfo_all_blocks=1 00:08:40.509 --rc geninfo_unexecuted_blocks=1 00:08:40.509 00:08:40.509 ' 00:08:40.509 13:03:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.509 13:03:27 -- nvmf/common.sh@7 -- # uname -s 00:08:40.509 13:03:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.509 13:03:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.509 13:03:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.509 13:03:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.509 13:03:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.509 13:03:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.509 13:03:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.509 13:03:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.509 13:03:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.509 13:03:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.509 13:03:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fc37e1a9-b301-4ee2-b448-5efe352245f6 00:08:40.509 
13:03:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=fc37e1a9-b301-4ee2-b448-5efe352245f6 00:08:40.509 13:03:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.509 13:03:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.509 13:03:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:40.509 13:03:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.509 13:03:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.509 13:03:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.509 13:03:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.509 13:03:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.509 13:03:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.509 13:03:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.509 13:03:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.509 13:03:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.509 13:03:27 -- paths/export.sh@5 -- # export PATH 00:08:40.509 13:03:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.509 13:03:27 -- nvmf/common.sh@51 -- # : 0 00:08:40.509 13:03:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.509 13:03:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.509 13:03:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.509 13:03:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.509 13:03:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.509 13:03:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.510 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.510 13:03:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.510 13:03:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.510 13:03:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.510 13:03:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:40.510 13:03:27 -- spdk/autotest.sh@32 -- # uname -s 00:08:40.510 13:03:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:40.510 13:03:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:40.510 13:03:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:40.510 13:03:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:40.510 13:03:27 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:40.510 13:03:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:40.510 13:03:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:40.768 13:03:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:40.768 13:03:27 -- spdk/autotest.sh@48 -- # udevadm_pid=55032 00:08:40.768 13:03:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:40.768 13:03:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:40.768 13:03:27 -- pm/common@17 -- # local monitor 00:08:40.768 13:03:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.768 13:03:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.768 13:03:27 -- pm/common@21 -- # date +%s 00:08:40.768 13:03:27 -- pm/common@25 -- # sleep 1 00:08:40.768 13:03:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490207 00:08:40.768 13:03:27 -- pm/common@21 -- # date +%s 00:08:40.768 13:03:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490207 00:08:40.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490207_collect-cpu-load.pm.log 00:08:40.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490207_collect-vmstat.pm.log 00:08:41.704 13:03:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:41.704 13:03:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:41.704 13:03:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.704 13:03:28 -- common/autotest_common.sh@10 -- # set +x 00:08:41.704 13:03:28 -- spdk/autotest.sh@59 -- # create_test_list 00:08:41.704 13:03:28 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:41.704 13:03:28 -- common/autotest_common.sh@10 -- # set +x 00:08:41.704 13:03:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:41.704 13:03:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:41.704 13:03:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:41.704 13:03:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:41.704 13:03:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:41.704 13:03:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:41.704 13:03:28 -- common/autotest_common.sh@1457 -- # uname 00:08:41.704 13:03:28 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:41.704 13:03:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:41.704 13:03:28 -- common/autotest_common.sh@1477 -- # uname 00:08:41.704 13:03:28 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:41.704 13:03:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:41.704 13:03:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:41.704 lcov: LCOV version 1.15 00:08:41.704 13:03:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:59.799 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:59.799 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:14.691 13:04:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:14.691 13:04:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.691 13:04:00 -- common/autotest_common.sh@10 -- # set +x 00:09:14.691 13:04:00 -- spdk/autotest.sh@78 -- # rm -f 00:09:14.691 13:04:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:14.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.949 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:14.949 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:14.949 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:09:15.207 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:09:15.207 13:04:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:15.207 13:04:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:15.207 13:04:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:15.207 13:04:01 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:15.207 13:04:01 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:15.207 13:04:01 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:15.207 13:04:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:09:15.207 13:04:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:15.207 13:04:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:15.207 13:04:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:09:15.207 13:04:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:09:15.207 13:04:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:15.207 13:04:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:09:15.207 13:04:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:09:15.207 13:04:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:09:15.207 13:04:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:09:15.207 13:04:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:09:15.207 13:04:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:15.207 13:04:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:15.207 13:04:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:09:15.207 13:04:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:09:15.207 13:04:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:15.207 13:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:15.207 13:04:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:15.207 13:04:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:09:15.207 13:04:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:15.207 13:04:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:09:15.207 13:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:09:15.207 13:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:15.207 13:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:15.207 13:04:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:15.207 13:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:15.207 13:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:15.207 13:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:15.207 13:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:15.207 13:04:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:15.207 No valid GPT data, bailing 00:09:15.207 13:04:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:15.207 13:04:02 -- scripts/common.sh@394 -- # pt= 00:09:15.207 13:04:02 -- scripts/common.sh@395 -- # return 1 00:09:15.207 13:04:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:15.207 1+0 records in 00:09:15.207 1+0 records out 00:09:15.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128202 s, 81.8 MB/s 00:09:15.207 13:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:15.207 13:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:15.207 13:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:15.207 13:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:15.207 13:04:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:15.207 No valid GPT data, bailing 00:09:15.207 13:04:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:15.207 13:04:02 -- scripts/common.sh@394 -- # pt= 00:09:15.207 13:04:02 -- scripts/common.sh@395 -- # return 1 00:09:15.207 13:04:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:15.207 1+0 records in 00:09:15.207 1+0 records out 00:09:15.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519313 s, 202 MB/s 00:09:15.207 13:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:15.207 13:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:15.207 13:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:09:15.207 13:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:09:15.207 13:04:02 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:09:15.466 No valid GPT data, bailing 00:09:15.466 13:04:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:09:15.466 13:04:02 -- scripts/common.sh@394 -- # pt= 00:09:15.466 13:04:02 -- scripts/common.sh@395 -- # return 1 00:09:15.466 13:04:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:09:15.466 1+0 records in 00:09:15.466 1+0 records out 00:09:15.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501638 s, 209 MB/s 00:09:15.466 13:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:15.466 13:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:15.466 13:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:09:15.466 13:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:09:15.466 13:04:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:09:15.466 No valid GPT data, bailing 00:09:15.466 13:04:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:09:15.466 13:04:02 -- scripts/common.sh@394 -- # pt= 00:09:15.466 13:04:02 -- scripts/common.sh@395 -- # return 1 00:09:15.466 13:04:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:09:15.466 1+0 records in 00:09:15.466 1+0 records out 00:09:15.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472688 s, 222 MB/s 00:09:15.466 13:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:15.466 13:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:15.466 13:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:09:15.466 13:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:09:15.466 13:04:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:09:15.466 No valid GPT data, bailing 00:09:15.466 13:04:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:09:15.466 13:04:02 -- scripts/common.sh@394 -- # pt= 00:09:15.466 13:04:02 -- scripts/common.sh@395 -- # return 1 00:09:15.466 13:04:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:09:15.466 1+0 records in 00:09:15.466 1+0 records out 00:09:15.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514515 s, 204 MB/s 00:09:15.466 13:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:15.466 13:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:15.466 13:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:09:15.466 13:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:09:15.466 13:04:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:09:15.724 No valid GPT data, bailing 00:09:15.724 13:04:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:09:15.724 13:04:02 -- scripts/common.sh@394 -- # pt= 00:09:15.724 13:04:02 -- scripts/common.sh@395 -- # return 1 00:09:15.724 13:04:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:09:15.724 1+0 records in 00:09:15.724 1+0 records out 00:09:15.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361459 s, 290 MB/s 00:09:15.724 13:04:02 -- spdk/autotest.sh@105 -- # sync 00:09:15.724 13:04:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:15.724 13:04:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:15.724 13:04:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:18.278 
13:04:04 -- spdk/autotest.sh@111 -- # uname -s 00:09:18.278 13:04:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:18.278 13:04:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:18.278 13:04:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:18.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.844 Hugepages 00:09:18.844 node hugesize free / total 00:09:18.844 node0 1048576kB 0 / 0 00:09:18.844 node0 2048kB 0 / 0 00:09:18.844 00:09:18.844 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:18.844 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:18.844 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:19.102 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:09:19.102 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:09:19.102 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:09:19.102 13:04:06 -- spdk/autotest.sh@117 -- # uname -s 00:09:19.102 13:04:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:19.102 13:04:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:19.102 13:04:06 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:19.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.603 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.603 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.603 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.603 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.603 13:04:07 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:21.536 13:04:08 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:21.536 13:04:08 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:21.536 13:04:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.536 13:04:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:21.536 13:04:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:21.536 13:04:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:21.536 13:04:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.536 13:04:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:21.536 13:04:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.536 13:04:08 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:21.536 13:04:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.536 13:04:08 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.132 Waiting for block devices as requested 00:09:22.132 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.407 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.407 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.407 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.671 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:27.671 13:04:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
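The get_nvme_ctrlr_from_bdf calls traced below resolve a PCI address to its /dev/nvme* controller node by matching sysfs symlink targets. The same lookup as a standalone sketch, using a hypothetical helper name; it mirrors the readlink/grep pattern visible in the trace:

  # Sketch: map a PCI BDF such as 0000:00:10.0 to its NVMe controller node.
  # Hypothetical helper; the trace below does the equivalent inline.
  bdf_to_nvme_ctrlr() {
    local bdf=$1 link
    for link in /sys/class/nvme/nvme*; do
      # The resolved sysfs path contains ".../<bdf>/nvme/nvmeN" for the
      # controller sitting on that PCI function.
      if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
        echo "/dev/$(basename "$link")"
        return 0
      fi
    done
    return 1
  }
  bdf_to_nvme_ctrlr 0000:00:10.0   # on this rig resolves to /dev/nvme1
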
00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:27.671 13:04:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:09:27.671 13:04:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:27.671 13:04:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:27.671 13:04:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:27.671 13:04:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1543 -- # continue 00:09:27.671 13:04:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:27.671 13:04:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:27.671 13:04:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:09:27.671 13:04:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1543 -- # continue 00:09:27.671 13:04:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:27.671 13:04:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:27.671 13:04:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:27.671 13:04:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1543 -- # continue 00:09:27.671 13:04:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:09:27.671 13:04:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:27.671 13:04:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:27.671 13:04:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:27.671 13:04:14 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:27.671 13:04:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:09:27.672 13:04:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:27.672 13:04:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:27.672 13:04:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:27.672 13:04:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:27.672 13:04:14 -- common/autotest_common.sh@1543 -- # continue 00:09:27.672 13:04:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:27.672 13:04:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.672 13:04:14 -- common/autotest_common.sh@10 -- # set +x 00:09:27.672 13:04:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:27.672 13:04:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.672 13:04:14 -- common/autotest_common.sh@10 -- # set +x 00:09:27.672 13:04:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.799 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.799 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.799 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.799 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.059 13:04:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:29.059 13:04:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.059 13:04:15 -- common/autotest_common.sh@10 -- # set +x 00:09:29.059 13:04:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:29.059 13:04:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:29.059 13:04:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:29.059 13:04:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:29.059 13:04:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:29.059 13:04:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:29.059 13:04:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:29.059 13:04:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:29.059 13:04:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:29.059 13:04:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:29.059 13:04:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:29.059 13:04:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:29.059 13:04:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:29.059 13:04:15 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:29.059 13:04:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:29.059 13:04:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:29.059 13:04:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:29.059 13:04:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:29.059 
13:04:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:29.059 13:04:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:29.059 13:04:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:29.059 13:04:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:09:29.059 13:04:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:29.059 13:04:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:29.059 13:04:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:09:29.059 13:04:15 -- common/autotest_common.sh@1572 -- # return 0 00:09:29.059 13:04:16 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:09:29.059 13:04:16 -- common/autotest_common.sh@1580 -- # return 0 00:09:29.059 13:04:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:29.059 13:04:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:29.059 13:04:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:29.059 13:04:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:29.059 13:04:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:29.059 13:04:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.060 13:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:29.060 13:04:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:29.060 13:04:16 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:29.060 13:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.060 13:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.060 13:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:29.060 ************************************ 00:09:29.060 START TEST env 00:09:29.060 ************************************ 00:09:29.060 13:04:16 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:29.318 * Looking for test storage... 
00:09:29.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.318 13:04:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.318 13:04:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.318 13:04:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.318 13:04:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.318 13:04:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.318 13:04:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.318 13:04:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.318 13:04:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.318 13:04:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.318 13:04:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.318 13:04:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.318 13:04:16 env -- scripts/common.sh@344 -- # case "$op" in 00:09:29.318 13:04:16 env -- scripts/common.sh@345 -- # : 1 00:09:29.318 13:04:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.318 13:04:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.318 13:04:16 env -- scripts/common.sh@365 -- # decimal 1 00:09:29.318 13:04:16 env -- scripts/common.sh@353 -- # local d=1 00:09:29.318 13:04:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.318 13:04:16 env -- scripts/common.sh@355 -- # echo 1 00:09:29.318 13:04:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.318 13:04:16 env -- scripts/common.sh@366 -- # decimal 2 00:09:29.318 13:04:16 env -- scripts/common.sh@353 -- # local d=2 00:09:29.318 13:04:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.318 13:04:16 env -- scripts/common.sh@355 -- # echo 2 00:09:29.318 13:04:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.318 13:04:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.318 13:04:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.318 13:04:16 env -- scripts/common.sh@368 -- # return 0 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.318 --rc genhtml_branch_coverage=1 00:09:29.318 --rc genhtml_function_coverage=1 00:09:29.318 --rc genhtml_legend=1 00:09:29.318 --rc geninfo_all_blocks=1 00:09:29.318 --rc geninfo_unexecuted_blocks=1 00:09:29.318 00:09:29.318 ' 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.318 --rc genhtml_branch_coverage=1 00:09:29.318 --rc genhtml_function_coverage=1 00:09:29.318 --rc genhtml_legend=1 00:09:29.318 --rc geninfo_all_blocks=1 00:09:29.318 --rc geninfo_unexecuted_blocks=1 00:09:29.318 00:09:29.318 ' 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.318 --rc genhtml_branch_coverage=1 00:09:29.318 --rc genhtml_function_coverage=1 00:09:29.318 --rc 
genhtml_legend=1 00:09:29.318 --rc geninfo_all_blocks=1 00:09:29.318 --rc geninfo_unexecuted_blocks=1 00:09:29.318 00:09:29.318 ' 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.318 --rc genhtml_branch_coverage=1 00:09:29.318 --rc genhtml_function_coverage=1 00:09:29.318 --rc genhtml_legend=1 00:09:29.318 --rc geninfo_all_blocks=1 00:09:29.318 --rc geninfo_unexecuted_blocks=1 00:09:29.318 00:09:29.318 ' 00:09:29.318 13:04:16 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.318 13:04:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.318 13:04:16 env -- common/autotest_common.sh@10 -- # set +x 00:09:29.318 ************************************ 00:09:29.318 START TEST env_memory 00:09:29.318 ************************************ 00:09:29.318 13:04:16 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:29.318 00:09:29.318 00:09:29.318 CUnit - A unit testing framework for C - Version 2.1-3 00:09:29.318 http://cunit.sourceforge.net/ 00:09:29.318 00:09:29.318 00:09:29.318 Suite: memory 00:09:29.318 Test: alloc and free memory map ...[2024-12-06 13:04:16.309241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:29.577 passed 00:09:29.577 Test: mem map translation ...[2024-12-06 13:04:16.379210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:29.577 [2024-12-06 13:04:16.379326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:29.577 [2024-12-06 13:04:16.379422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:29.577 [2024-12-06 13:04:16.379458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:29.577 passed 00:09:29.577 Test: mem map registration ...[2024-12-06 13:04:16.478398] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:29.577 [2024-12-06 13:04:16.478601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:29.577 passed 00:09:29.836 Test: mem map adjacent registrations ...passed 00:09:29.836 00:09:29.836 Run Summary: Type Total Ran Passed Failed Inactive 00:09:29.836 suites 1 1 n/a 0 0 00:09:29.836 tests 4 4 4 0 0 00:09:29.836 asserts 152 152 152 0 n/a 00:09:29.836 00:09:29.836 Elapsed time = 0.344 seconds 00:09:29.836 00:09:29.836 real 0m0.391s 00:09:29.836 user 0m0.353s 00:09:29.836 sys 0m0.029s 00:09:29.836 13:04:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.836 13:04:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:29.836 ************************************ 00:09:29.836 END TEST env_memory 00:09:29.836 ************************************ 00:09:29.836 13:04:16 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:29.836 13:04:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.836 13:04:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.836 13:04:16 env -- common/autotest_common.sh@10 -- # set +x 00:09:29.836 ************************************ 00:09:29.836 START TEST env_vtophys 00:09:29.836 ************************************ 00:09:29.836 13:04:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:29.836 EAL: lib.eal log level changed from notice to debug 00:09:29.836 EAL: Detected lcore 0 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 1 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 2 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 3 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 4 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 5 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 6 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 7 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 8 as core 0 on socket 0 00:09:29.836 EAL: Detected lcore 9 as core 0 on socket 0 00:09:29.836 EAL: Maximum logical cores by configuration: 128 00:09:29.836 EAL: Detected CPU lcores: 10 00:09:29.836 EAL: Detected NUMA nodes: 1 00:09:29.836 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:29.836 EAL: Detected shared linkage of DPDK 00:09:29.836 EAL: No shared files mode enabled, IPC will be disabled 00:09:29.836 EAL: Selected IOVA mode 'PA' 00:09:29.836 EAL: Probing VFIO support... 00:09:29.836 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:29.836 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:29.836 EAL: Ask a virtual area of 0x2e000 bytes 00:09:29.836 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:29.836 EAL: Setting up physically contiguous memory... 
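A note on the memory_ut failures a few lines up: they are the unit test deliberately probing spdk_mem_map_set_translation's parameter checks. SPDK mem maps track translations at 2 MB granularity, so vaddr=2097152 with len=1234 is rejected (length not a 2 MB multiple), vaddr=1234 is rejected (address unaligned), and vaddr=281474976710656 (2^48) is rejected as a non-usermode address. A minimal sketch of the same API, assuming spdk_env_init() has already run; mem_map_demo and the values in it are ours, not the test's:

#include "spdk/stdinc.h"
#include "spdk/env.h"

#define VALUE_2MB (2ULL * 1024 * 1024)

/* Hypothetical demo, not part of memory_ut: set one 2 MB-aligned
 * translation and read it back. */
static int
mem_map_demo(void)
{
	/* 0 is the default translation returned for any unset region. */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, NULL, NULL);
	uint64_t vaddr = 0x200000200000;	/* 2 MB aligned, below 2^48 */
	uint64_t size = VALUE_2MB;
	uint64_t trans;

	if (map == NULL) {
		return -1;
	}
	/* Rejected if vaddr or size is not a 2 MB multiple, exactly the
	 * "invalid parameters" errors memory_ut provokes above. */
	if (spdk_mem_map_set_translation(map, vaddr, VALUE_2MB,
					 0x1000 * VALUE_2MB) != 0) {
		spdk_mem_map_free(&map);
		return -1;
	}
	trans = spdk_mem_map_translate(map, vaddr, &size);
	spdk_mem_map_free(&map);
	return trans == 0x1000 * VALUE_2MB ? 0 : -1;
}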
00:09:29.836 EAL: Setting maximum number of open files to 524288 00:09:29.836 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:29.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:29.836 EAL: Ask a virtual area of 0x61000 bytes 00:09:29.836 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:29.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:29.836 EAL: Ask a virtual area of 0x400000000 bytes 00:09:29.836 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:29.836 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:29.836 EAL: Ask a virtual area of 0x61000 bytes 00:09:29.836 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:29.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:29.836 EAL: Ask a virtual area of 0x400000000 bytes 00:09:29.836 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:29.836 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:29.836 EAL: Ask a virtual area of 0x61000 bytes 00:09:29.836 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:29.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:29.836 EAL: Ask a virtual area of 0x400000000 bytes 00:09:29.836 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:29.836 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:29.836 EAL: Ask a virtual area of 0x61000 bytes 00:09:29.836 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:29.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:29.836 EAL: Ask a virtual area of 0x400000000 bytes 00:09:29.836 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:29.836 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:29.836 EAL: Hugepages will be freed exactly as allocated. 00:09:29.836 EAL: No shared files mode enabled, IPC is disabled 00:09:29.836 EAL: No shared files mode enabled, IPC is disabled 00:09:30.094 EAL: TSC frequency is ~2200000 KHz 00:09:30.094 EAL: Main lcore 0 is ready (tid=7efc992b4a40;cpuset=[0]) 00:09:30.094 EAL: Trying to obtain current memory policy. 00:09:30.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.094 EAL: Restoring previous memory policy: 0 00:09:30.094 EAL: request: mp_malloc_sync 00:09:30.094 EAL: No shared files mode enabled, IPC is disabled 00:09:30.094 EAL: Heap on socket 0 was expanded by 2MB 00:09:30.094 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:30.094 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:30.094 EAL: Mem event callback 'spdk:(nil)' registered 00:09:30.094 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:30.094 00:09:30.094 00:09:30.094 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.094 http://cunit.sourceforge.net/ 00:09:30.094 00:09:30.094 00:09:30.094 Suite: components_suite 00:09:30.661 Test: vtophys_malloc_test ...passed 00:09:30.661 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
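For scale, each of those four memseg lists reserves n_segs * hugepage_sz = 8192 * 2 MiB = 0x400000000 bytes (16 GiB) of virtual address space, plus a 0x61000-byte list header, which is why the reservations step by 0x400200000: 0x200000200000, 0x200400400000, 0x200800600000, 0x200c00800000. Roughly 64 GiB of VA is reserved up front while little actual hugepage memory is committed; the heap then grows and shrinks on demand, as the expand/shrink pairs below show.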
00:09:30.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.661 EAL: Restoring previous memory policy: 4 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was expanded by 4MB 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was shrunk by 4MB 00:09:30.661 EAL: Trying to obtain current memory policy. 00:09:30.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.661 EAL: Restoring previous memory policy: 4 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was expanded by 6MB 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was shrunk by 6MB 00:09:30.661 EAL: Trying to obtain current memory policy. 00:09:30.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.661 EAL: Restoring previous memory policy: 4 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was expanded by 10MB 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was shrunk by 10MB 00:09:30.661 EAL: Trying to obtain current memory policy. 00:09:30.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.661 EAL: Restoring previous memory policy: 4 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was expanded by 18MB 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was shrunk by 18MB 00:09:30.661 EAL: Trying to obtain current memory policy. 00:09:30.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.661 EAL: Restoring previous memory policy: 4 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was expanded by 34MB 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was shrunk by 34MB 00:09:30.661 EAL: Trying to obtain current memory policy. 
00:09:30.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.661 EAL: Restoring previous memory policy: 4 00:09:30.661 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.661 EAL: request: mp_malloc_sync 00:09:30.661 EAL: No shared files mode enabled, IPC is disabled 00:09:30.661 EAL: Heap on socket 0 was expanded by 66MB 00:09:30.922 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.923 EAL: request: mp_malloc_sync 00:09:30.923 EAL: No shared files mode enabled, IPC is disabled 00:09:30.923 EAL: Heap on socket 0 was shrunk by 66MB 00:09:30.923 EAL: Trying to obtain current memory policy. 00:09:30.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.923 EAL: Restoring previous memory policy: 4 00:09:30.923 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.923 EAL: request: mp_malloc_sync 00:09:30.923 EAL: No shared files mode enabled, IPC is disabled 00:09:30.923 EAL: Heap on socket 0 was expanded by 130MB 00:09:31.181 EAL: Calling mem event callback 'spdk:(nil)' 00:09:31.181 EAL: request: mp_malloc_sync 00:09:31.181 EAL: No shared files mode enabled, IPC is disabled 00:09:31.181 EAL: Heap on socket 0 was shrunk by 130MB 00:09:31.438 EAL: Trying to obtain current memory policy. 00:09:31.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:31.438 EAL: Restoring previous memory policy: 4 00:09:31.438 EAL: Calling mem event callback 'spdk:(nil)' 00:09:31.438 EAL: request: mp_malloc_sync 00:09:31.438 EAL: No shared files mode enabled, IPC is disabled 00:09:31.438 EAL: Heap on socket 0 was expanded by 258MB 00:09:32.004 EAL: Calling mem event callback 'spdk:(nil)' 00:09:32.004 EAL: request: mp_malloc_sync 00:09:32.004 EAL: No shared files mode enabled, IPC is disabled 00:09:32.004 EAL: Heap on socket 0 was shrunk by 258MB 00:09:32.263 EAL: Trying to obtain current memory policy. 00:09:32.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:32.521 EAL: Restoring previous memory policy: 4 00:09:32.521 EAL: Calling mem event callback 'spdk:(nil)' 00:09:32.521 EAL: request: mp_malloc_sync 00:09:32.521 EAL: No shared files mode enabled, IPC is disabled 00:09:32.521 EAL: Heap on socket 0 was expanded by 514MB 00:09:33.459 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.459 EAL: request: mp_malloc_sync 00:09:33.459 EAL: No shared files mode enabled, IPC is disabled 00:09:33.459 EAL: Heap on socket 0 was shrunk by 514MB 00:09:34.393 EAL: Trying to obtain current memory policy. 
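A pattern worth noting in these rounds: every "expanded by N MB" is consistent with the test doubling its request each round (2 MB, 4 MB, ..., 1024 MB) plus one extra 2 MB hugepage of overhead: 4 = 2+2, 6 = 4+2, 10 = 8+2, ..., 258 = 256+2, and 514 and 1026 below. A sketch of the same doubling loop against the public env API; vtophys_doubling_demo is our name, not the test's, and it assumes an initialized SPDK environment:

#include "spdk/stdinc.h"
#include "spdk/env.h"

/* Hypothetical demo mirroring the rounds above: double the buffer,
 * check it has a physical translation, free it again. */
static int
vtophys_doubling_demo(void)
{
	for (uint64_t size = 2ULL << 20; size <= 1ULL << 30; size *= 2) {
		void *buf = spdk_dma_malloc(size, 2ULL << 20, NULL);
		uint64_t len = size;

		if (buf == NULL) {
			return -1;	/* heap could not expand */
		}
		if (spdk_vtophys(buf, &len) == SPDK_VTOPHYS_ERROR) {
			spdk_dma_free(buf);
			return -1;	/* no physical mapping */
		}
		spdk_dma_free(buf);	/* the "shrunk by ..." side of each pair */
	}
	return 0;
}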
00:09:34.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:34.718 EAL: Restoring previous memory policy: 4 00:09:34.718 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.718 EAL: request: mp_malloc_sync 00:09:34.718 EAL: No shared files mode enabled, IPC is disabled 00:09:34.718 EAL: Heap on socket 0 was expanded by 1026MB 00:09:36.109 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.366 EAL: request: mp_malloc_sync 00:09:36.366 EAL: No shared files mode enabled, IPC is disabled 00:09:36.366 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:37.739 passed 00:09:37.739 00:09:37.739 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.739 suites 1 1 n/a 0 0 00:09:37.739 tests 2 2 2 0 0 00:09:37.739 asserts 5705 5705 5705 0 n/a 00:09:37.739 00:09:37.739 Elapsed time = 7.710 seconds 00:09:37.739 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.739 EAL: request: mp_malloc_sync 00:09:37.739 EAL: No shared files mode enabled, IPC is disabled 00:09:37.739 EAL: Heap on socket 0 was shrunk by 2MB 00:09:37.739 EAL: No shared files mode enabled, IPC is disabled 00:09:37.739 EAL: No shared files mode enabled, IPC is disabled 00:09:37.739 EAL: No shared files mode enabled, IPC is disabled 00:09:37.739 00:09:37.739 real 0m8.044s 00:09:37.739 user 0m6.756s 00:09:37.739 sys 0m1.116s 00:09:37.739 13:04:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.739 13:04:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:37.739 ************************************ 00:09:37.739 END TEST env_vtophys 00:09:37.739 ************************************ 00:09:37.996 13:04:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:37.996 13:04:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.996 13:04:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.996 13:04:24 env -- common/autotest_common.sh@10 -- # set +x 00:09:37.996 ************************************ 00:09:37.996 START TEST env_pci 00:09:37.996 ************************************ 00:09:37.996 13:04:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:37.996 00:09:37.996 00:09:37.996 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.996 http://cunit.sourceforge.net/ 00:09:37.996 00:09:37.996 00:09:37.996 Suite: pci 00:09:37.996 Test: pci_hook ...[2024-12-06 13:04:24.813181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57862 has claimed it 00:09:37.996 passed 00:09:37.996 00:09:37.996 EAL: Cannot find device (10000:00:01.0) 00:09:37.996 EAL: Failed to attach device on primary process 00:09:37.996 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.996 suites 1 1 n/a 0 0 00:09:37.996 tests 1 1 1 0 0 00:09:37.996 asserts 25 25 25 0 n/a 00:09:37.996 00:09:37.996 Elapsed time = 0.009 seconds 00:09:37.996 00:09:37.996 real 0m0.093s 00:09:37.996 user 0m0.047s 00:09:37.996 sys 0m0.044s 00:09:37.996 13:04:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.996 ************************************ 00:09:37.996 END TEST env_pci 00:09:37.996 ************************************ 00:09:37.996 13:04:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:37.996 13:04:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:37.996 13:04:24 env -- env/env.sh@15 -- # uname 00:09:37.996 13:04:24 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:37.996 13:04:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:37.996 13:04:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:37.996 13:04:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.996 13:04:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.996 13:04:24 env -- common/autotest_common.sh@10 -- # set +x 00:09:37.996 ************************************ 00:09:37.996 START TEST env_dpdk_post_init 00:09:37.996 ************************************ 00:09:37.996 13:04:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:37.996 EAL: Detected CPU lcores: 10 00:09:37.996 EAL: Detected NUMA nodes: 1 00:09:37.996 EAL: Detected shared linkage of DPDK 00:09:38.260 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:38.260 EAL: Selected IOVA mode 'PA' 00:09:38.260 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:38.260 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:38.260 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:38.260 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:09:38.260 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:09:38.260 Starting DPDK initialization... 00:09:38.260 Starting SPDK post initialization... 00:09:38.260 SPDK NVMe probe 00:09:38.260 Attaching to 0000:00:10.0 00:09:38.260 Attaching to 0000:00:11.0 00:09:38.260 Attaching to 0000:00:12.0 00:09:38.260 Attaching to 0000:00:13.0 00:09:38.260 Attached to 0000:00:11.0 00:09:38.261 Attached to 0000:00:13.0 00:09:38.261 Attached to 0000:00:10.0 00:09:38.261 Attached to 0000:00:12.0 00:09:38.261 Cleaning up... 
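Those Attaching/Attached lines are the standard SPDK NVMe probe flow: enumerate the controllers bound to the userspace driver (1b36:0010 is QEMU's emulated NVMe), let a probe callback accept each one, then receive an attach callback per controller. Note the attach order (11.0, 13.0, 10.0, 12.0) is completion order and need not match probe order. A minimal sketch of that callback pattern, assuming an initialized env; the function names are ours:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;	/* true = attach to this controller */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

static int
probe_demo(void)
{
	/* NULL trid means "all local PCIe NVMe devices". */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}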
00:09:38.261 00:09:38.261 real 0m0.329s 00:09:38.261 user 0m0.112s 00:09:38.261 sys 0m0.118s 00:09:38.261 13:04:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.261 13:04:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:38.261 ************************************ 00:09:38.261 END TEST env_dpdk_post_init 00:09:38.261 ************************************ 00:09:38.517 13:04:25 env -- env/env.sh@26 -- # uname 00:09:38.517 13:04:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:38.517 13:04:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:38.517 13:04:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.517 13:04:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.517 13:04:25 env -- common/autotest_common.sh@10 -- # set +x 00:09:38.517 ************************************ 00:09:38.517 START TEST env_mem_callbacks 00:09:38.517 ************************************ 00:09:38.517 13:04:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:38.517 EAL: Detected CPU lcores: 10 00:09:38.517 EAL: Detected NUMA nodes: 1 00:09:38.517 EAL: Detected shared linkage of DPDK 00:09:38.517 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:38.517 EAL: Selected IOVA mode 'PA' 00:09:38.517 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:38.517 00:09:38.517 00:09:38.517 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.517 http://cunit.sourceforge.net/ 00:09:38.517 00:09:38.517 00:09:38.517 Suite: memory 00:09:38.517 Test: test ... 00:09:38.517 register 0x200000200000 2097152 00:09:38.517 malloc 3145728 00:09:38.517 register 0x200000400000 4194304 00:09:38.517 buf 0x2000004fffc0 len 3145728 PASSED 00:09:38.517 malloc 64 00:09:38.517 buf 0x2000004ffec0 len 64 PASSED 00:09:38.517 malloc 4194304 00:09:38.517 register 0x200000800000 6291456 00:09:38.517 buf 0x2000009fffc0 len 4194304 PASSED 00:09:38.517 free 0x2000004fffc0 3145728 00:09:38.517 free 0x2000004ffec0 64 00:09:38.517 unregister 0x200000400000 4194304 PASSED 00:09:38.517 free 0x2000009fffc0 4194304 00:09:38.775 unregister 0x200000800000 6291456 PASSED 00:09:38.775 malloc 8388608 00:09:38.775 register 0x200000400000 10485760 00:09:38.775 buf 0x2000005fffc0 len 8388608 PASSED 00:09:38.775 free 0x2000005fffc0 8388608 00:09:38.775 unregister 0x200000400000 10485760 PASSED 00:09:38.775 passed 00:09:38.775 00:09:38.775 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.775 suites 1 1 n/a 0 0 00:09:38.775 tests 1 1 1 0 0 00:09:38.775 asserts 15 15 15 0 n/a 00:09:38.775 00:09:38.775 Elapsed time = 0.062 seconds 00:09:38.775 00:09:38.775 real 0m0.258s 00:09:38.775 user 0m0.080s 00:09:38.775 sys 0m0.071s 00:09:38.775 ************************************ 00:09:38.775 END TEST env_mem_callbacks 00:09:38.775 ************************************ 00:09:38.775 13:04:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.775 13:04:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:38.775 ************************************ 00:09:38.775 END TEST env 00:09:38.775 ************************************ 00:09:38.775 00:09:38.775 real 0m9.612s 00:09:38.775 user 0m7.550s 00:09:38.775 sys 0m1.645s 00:09:38.775 13:04:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.775 13:04:25 env -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.775 13:04:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:38.775 13:04:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.775 13:04:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.775 13:04:25 -- common/autotest_common.sh@10 -- # set +x 00:09:38.775 ************************************ 00:09:38.775 START TEST rpc 00:09:38.775 ************************************ 00:09:38.775 13:04:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:38.775 * Looking for test storage... 00:09:38.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:38.775 13:04:25 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.775 13:04:25 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.775 13:04:25 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.034 13:04:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.034 13:04:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.034 13:04:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.034 13:04:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.034 13:04:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.034 13:04:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.034 13:04:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.034 13:04:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:39.034 13:04:25 rpc -- scripts/common.sh@345 -- # : 1 00:09:39.034 13:04:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.034 13:04:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.034 13:04:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:39.034 13:04:25 rpc -- scripts/common.sh@353 -- # local d=1 00:09:39.034 13:04:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.034 13:04:25 rpc -- scripts/common.sh@355 -- # echo 1 00:09:39.034 13:04:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.034 13:04:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@353 -- # local d=2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.034 13:04:25 rpc -- scripts/common.sh@355 -- # echo 2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.034 13:04:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.034 13:04:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.034 13:04:25 rpc -- scripts/common.sh@368 -- # return 0 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.034 --rc genhtml_branch_coverage=1 00:09:39.034 --rc genhtml_function_coverage=1 00:09:39.034 --rc genhtml_legend=1 00:09:39.034 --rc geninfo_all_blocks=1 00:09:39.034 --rc geninfo_unexecuted_blocks=1 00:09:39.034 00:09:39.034 ' 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.034 --rc genhtml_branch_coverage=1 00:09:39.034 --rc genhtml_function_coverage=1 00:09:39.034 --rc genhtml_legend=1 00:09:39.034 --rc geninfo_all_blocks=1 00:09:39.034 --rc geninfo_unexecuted_blocks=1 00:09:39.034 00:09:39.034 ' 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.034 --rc genhtml_branch_coverage=1 00:09:39.034 --rc genhtml_function_coverage=1 00:09:39.034 --rc genhtml_legend=1 00:09:39.034 --rc geninfo_all_blocks=1 00:09:39.034 --rc geninfo_unexecuted_blocks=1 00:09:39.034 00:09:39.034 ' 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.034 --rc genhtml_branch_coverage=1 00:09:39.034 --rc genhtml_function_coverage=1 00:09:39.034 --rc genhtml_legend=1 00:09:39.034 --rc geninfo_all_blocks=1 00:09:39.034 --rc geninfo_unexecuted_blocks=1 00:09:39.034 00:09:39.034 ' 00:09:39.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.034 13:04:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57995 00:09:39.034 13:04:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:39.034 13:04:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.034 13:04:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57995 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 57995 ']' 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
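Everything in this suite is driven over that Unix socket: each rpc_cmd below (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, trace_get_info, the deletes) wraps a JSON-RPC 2.0 call to the spdk_tgt instance just launched on /var/tmp/spdk.sock. For illustration only, a raw-socket version of one such call; real use goes through scripts/rpc.py, and this sketch skips framing and partial reads:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	/* Same request rpc_cmd issues for "bdev_get_bdevs". */
	const char *req = "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
	char resp[8192];
	ssize_t n;
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0) {
		close(fd);
		return 1;
	}
	n = read(fd, resp, sizeof(resp) - 1);	/* result: a JSON array like the dumps below */
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return 0;
}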
00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.034 13:04:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.291 [2024-12-06 13:04:26.052101] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:09:39.291 [2024-12-06 13:04:26.052347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57995 ] 00:09:39.291 [2024-12-06 13:04:26.260141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.549 [2024-12-06 13:04:26.413915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:39.549 [2024-12-06 13:04:26.414275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57995' to capture a snapshot of events at runtime. 00:09:39.549 [2024-12-06 13:04:26.414305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.549 [2024-12-06 13:04:26.414322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.549 [2024-12-06 13:04:26.414335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57995 for offline analysis/debug. 00:09:39.549 [2024-12-06 13:04:26.415772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.494 13:04:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.494 13:04:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:40.495 13:04:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:40.495 13:04:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:40.495 13:04:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:40.495 13:04:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:40.495 13:04:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.495 13:04:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.495 13:04:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 ************************************ 00:09:40.495 START TEST rpc_integrity 00:09:40.495 ************************************ 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.495 13:04:27 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:40.495 { 00:09:40.495 "name": "Malloc0", 00:09:40.495 "aliases": [ 00:09:40.495 "28574557-2c75-4583-9626-e4514ee0f24f" 00:09:40.495 ], 00:09:40.495 "product_name": "Malloc disk", 00:09:40.495 "block_size": 512, 00:09:40.495 "num_blocks": 16384, 00:09:40.495 "uuid": "28574557-2c75-4583-9626-e4514ee0f24f", 00:09:40.495 "assigned_rate_limits": { 00:09:40.495 "rw_ios_per_sec": 0, 00:09:40.495 "rw_mbytes_per_sec": 0, 00:09:40.495 "r_mbytes_per_sec": 0, 00:09:40.495 "w_mbytes_per_sec": 0 00:09:40.495 }, 00:09:40.495 "claimed": false, 00:09:40.495 "zoned": false, 00:09:40.495 "supported_io_types": { 00:09:40.495 "read": true, 00:09:40.495 "write": true, 00:09:40.495 "unmap": true, 00:09:40.495 "flush": true, 00:09:40.495 "reset": true, 00:09:40.495 "nvme_admin": false, 00:09:40.495 "nvme_io": false, 00:09:40.495 "nvme_io_md": false, 00:09:40.495 "write_zeroes": true, 00:09:40.495 "zcopy": true, 00:09:40.495 "get_zone_info": false, 00:09:40.495 "zone_management": false, 00:09:40.495 "zone_append": false, 00:09:40.495 "compare": false, 00:09:40.495 "compare_and_write": false, 00:09:40.495 "abort": true, 00:09:40.495 "seek_hole": false, 00:09:40.495 "seek_data": false, 00:09:40.495 "copy": true, 00:09:40.495 "nvme_iov_md": false 00:09:40.495 }, 00:09:40.495 "memory_domains": [ 00:09:40.495 { 00:09:40.495 "dma_device_id": "system", 00:09:40.495 "dma_device_type": 1 00:09:40.495 }, 00:09:40.495 { 00:09:40.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.495 "dma_device_type": 2 00:09:40.495 } 00:09:40.495 ], 00:09:40.495 "driver_specific": {} 00:09:40.495 } 00:09:40.495 ]' 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:40.495 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 [2024-12-06 13:04:27.488717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:40.495 [2024-12-06 13:04:27.488933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.495 [2024-12-06 13:04:27.488982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:40.495 [2024-12-06 13:04:27.489004] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.495 [2024-12-06 13:04:27.492493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.495 [2024-12-06 13:04:27.492606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:40.495 Passthru0 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.495 
13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.495 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.753 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.753 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:40.753 { 00:09:40.753 "name": "Malloc0", 00:09:40.753 "aliases": [ 00:09:40.753 "28574557-2c75-4583-9626-e4514ee0f24f" 00:09:40.753 ], 00:09:40.753 "product_name": "Malloc disk", 00:09:40.753 "block_size": 512, 00:09:40.753 "num_blocks": 16384, 00:09:40.753 "uuid": "28574557-2c75-4583-9626-e4514ee0f24f", 00:09:40.753 "assigned_rate_limits": { 00:09:40.753 "rw_ios_per_sec": 0, 00:09:40.753 "rw_mbytes_per_sec": 0, 00:09:40.753 "r_mbytes_per_sec": 0, 00:09:40.753 "w_mbytes_per_sec": 0 00:09:40.753 }, 00:09:40.753 "claimed": true, 00:09:40.753 "claim_type": "exclusive_write", 00:09:40.753 "zoned": false, 00:09:40.753 "supported_io_types": { 00:09:40.753 "read": true, 00:09:40.754 "write": true, 00:09:40.754 "unmap": true, 00:09:40.754 "flush": true, 00:09:40.754 "reset": true, 00:09:40.754 "nvme_admin": false, 00:09:40.754 "nvme_io": false, 00:09:40.754 "nvme_io_md": false, 00:09:40.754 "write_zeroes": true, 00:09:40.754 "zcopy": true, 00:09:40.754 "get_zone_info": false, 00:09:40.754 "zone_management": false, 00:09:40.754 "zone_append": false, 00:09:40.754 "compare": false, 00:09:40.754 "compare_and_write": false, 00:09:40.754 "abort": true, 00:09:40.754 "seek_hole": false, 00:09:40.754 "seek_data": false, 00:09:40.754 "copy": true, 00:09:40.754 "nvme_iov_md": false 00:09:40.754 }, 00:09:40.754 "memory_domains": [ 00:09:40.754 { 00:09:40.754 "dma_device_id": "system", 00:09:40.754 "dma_device_type": 1 00:09:40.754 }, 00:09:40.754 { 00:09:40.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.754 "dma_device_type": 2 00:09:40.754 } 00:09:40.754 ], 00:09:40.754 "driver_specific": {} 00:09:40.754 }, 00:09:40.754 { 00:09:40.754 "name": "Passthru0", 00:09:40.754 "aliases": [ 00:09:40.754 "18d0ebd5-2685-5c39-b651-d8b744e4097f" 00:09:40.754 ], 00:09:40.754 "product_name": "passthru", 00:09:40.754 "block_size": 512, 00:09:40.754 "num_blocks": 16384, 00:09:40.754 "uuid": "18d0ebd5-2685-5c39-b651-d8b744e4097f", 00:09:40.754 "assigned_rate_limits": { 00:09:40.754 "rw_ios_per_sec": 0, 00:09:40.754 "rw_mbytes_per_sec": 0, 00:09:40.754 "r_mbytes_per_sec": 0, 00:09:40.754 "w_mbytes_per_sec": 0 00:09:40.754 }, 00:09:40.754 "claimed": false, 00:09:40.754 "zoned": false, 00:09:40.754 "supported_io_types": { 00:09:40.754 "read": true, 00:09:40.754 "write": true, 00:09:40.754 "unmap": true, 00:09:40.754 "flush": true, 00:09:40.754 "reset": true, 00:09:40.754 "nvme_admin": false, 00:09:40.754 "nvme_io": false, 00:09:40.754 "nvme_io_md": false, 00:09:40.754 "write_zeroes": true, 00:09:40.754 "zcopy": true, 00:09:40.754 "get_zone_info": false, 00:09:40.754 "zone_management": false, 00:09:40.754 "zone_append": false, 00:09:40.754 "compare": false, 00:09:40.754 "compare_and_write": false, 00:09:40.754 "abort": true, 00:09:40.754 "seek_hole": false, 00:09:40.754 "seek_data": false, 00:09:40.754 "copy": true, 00:09:40.754 "nvme_iov_md": false 00:09:40.754 }, 00:09:40.754 "memory_domains": [ 00:09:40.754 { 00:09:40.754 "dma_device_id": "system", 00:09:40.754 "dma_device_type": 1 00:09:40.754 }, 00:09:40.754 { 00:09:40.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.754 "dma_device_type": 2 
00:09:40.754 } 00:09:40.754 ], 00:09:40.754 "driver_specific": { 00:09:40.754 "passthru": { 00:09:40.754 "name": "Passthru0", 00:09:40.754 "base_bdev_name": "Malloc0" 00:09:40.754 } 00:09:40.754 } 00:09:40.754 } 00:09:40.754 ]' 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:40.754 ************************************ 00:09:40.754 END TEST rpc_integrity 00:09:40.754 ************************************ 00:09:40.754 13:04:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:40.754 00:09:40.754 real 0m0.354s 00:09:40.754 user 0m0.215s 00:09:40.754 sys 0m0.042s 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.754 13:04:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 13:04:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:40.754 13:04:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.754 13:04:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.754 13:04:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 ************************************ 00:09:40.754 START TEST rpc_plugins 00:09:40.754 ************************************ 00:09:40.754 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:40.754 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:40.754 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.754 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:40.754 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:40.754 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.754 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:40.754 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.754 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:40.754 { 00:09:40.754 "name": "Malloc1", 00:09:40.754 "aliases": 
[ 00:09:40.754 "b220d2f2-0495-44a4-b35f-7ab63dd0bff0" 00:09:40.754 ], 00:09:40.754 "product_name": "Malloc disk", 00:09:40.754 "block_size": 4096, 00:09:40.754 "num_blocks": 256, 00:09:40.754 "uuid": "b220d2f2-0495-44a4-b35f-7ab63dd0bff0", 00:09:40.754 "assigned_rate_limits": { 00:09:40.754 "rw_ios_per_sec": 0, 00:09:40.754 "rw_mbytes_per_sec": 0, 00:09:40.754 "r_mbytes_per_sec": 0, 00:09:40.754 "w_mbytes_per_sec": 0 00:09:40.754 }, 00:09:40.754 "claimed": false, 00:09:40.754 "zoned": false, 00:09:40.754 "supported_io_types": { 00:09:40.754 "read": true, 00:09:40.754 "write": true, 00:09:40.754 "unmap": true, 00:09:40.754 "flush": true, 00:09:40.754 "reset": true, 00:09:40.754 "nvme_admin": false, 00:09:40.754 "nvme_io": false, 00:09:40.754 "nvme_io_md": false, 00:09:40.754 "write_zeroes": true, 00:09:40.754 "zcopy": true, 00:09:40.754 "get_zone_info": false, 00:09:40.754 "zone_management": false, 00:09:40.754 "zone_append": false, 00:09:40.754 "compare": false, 00:09:40.754 "compare_and_write": false, 00:09:40.754 "abort": true, 00:09:40.754 "seek_hole": false, 00:09:40.754 "seek_data": false, 00:09:40.754 "copy": true, 00:09:40.754 "nvme_iov_md": false 00:09:40.754 }, 00:09:40.754 "memory_domains": [ 00:09:40.754 { 00:09:40.754 "dma_device_id": "system", 00:09:40.754 "dma_device_type": 1 00:09:40.754 }, 00:09:40.754 { 00:09:40.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.754 "dma_device_type": 2 00:09:40.754 } 00:09:40.754 ], 00:09:40.754 "driver_specific": {} 00:09:40.754 } 00:09:40.754 ]' 00:09:41.012 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:41.012 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:41.012 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.012 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.012 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:41.012 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:41.012 ************************************ 00:09:41.012 END TEST rpc_plugins 00:09:41.012 ************************************ 00:09:41.012 13:04:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:41.012 00:09:41.012 real 0m0.174s 00:09:41.012 user 0m0.105s 00:09:41.012 sys 0m0.025s 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.012 13:04:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:41.012 13:04:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:41.012 13:04:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.012 13:04:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.012 13:04:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.012 ************************************ 00:09:41.012 START TEST rpc_trace_cmd_test 00:09:41.012 ************************************ 00:09:41.012 13:04:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:09:41.012 13:04:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:41.012 13:04:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:41.012 13:04:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.012 13:04:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.012 13:04:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.012 13:04:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:41.012 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57995", 00:09:41.012 "tpoint_group_mask": "0x8", 00:09:41.012 "iscsi_conn": { 00:09:41.012 "mask": "0x2", 00:09:41.012 "tpoint_mask": "0x0" 00:09:41.012 }, 00:09:41.012 "scsi": { 00:09:41.013 "mask": "0x4", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "bdev": { 00:09:41.013 "mask": "0x8", 00:09:41.013 "tpoint_mask": "0xffffffffffffffff" 00:09:41.013 }, 00:09:41.013 "nvmf_rdma": { 00:09:41.013 "mask": "0x10", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "nvmf_tcp": { 00:09:41.013 "mask": "0x20", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "ftl": { 00:09:41.013 "mask": "0x40", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "blobfs": { 00:09:41.013 "mask": "0x80", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "dsa": { 00:09:41.013 "mask": "0x200", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "thread": { 00:09:41.013 "mask": "0x400", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "nvme_pcie": { 00:09:41.013 "mask": "0x800", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "iaa": { 00:09:41.013 "mask": "0x1000", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "nvme_tcp": { 00:09:41.013 "mask": "0x2000", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "bdev_nvme": { 00:09:41.013 "mask": "0x4000", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "sock": { 00:09:41.013 "mask": "0x8000", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "blob": { 00:09:41.013 "mask": "0x10000", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "bdev_raid": { 00:09:41.013 "mask": "0x20000", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 }, 00:09:41.013 "scheduler": { 00:09:41.013 "mask": "0x40000", 00:09:41.013 "tpoint_mask": "0x0" 00:09:41.013 } 00:09:41.013 }' 00:09:41.013 13:04:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:41.271 ************************************ 00:09:41.271 END TEST rpc_trace_cmd_test 00:09:41.271 ************************************ 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:41.271 00:09:41.271 real 0m0.281s 
00:09:41.271 user 0m0.246s 00:09:41.271 sys 0m0.025s 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.271 13:04:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.271 13:04:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:41.271 13:04:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:41.271 13:04:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:41.271 13:04:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.271 13:04:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.271 13:04:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.271 ************************************ 00:09:41.271 START TEST rpc_daemon_integrity 00:09:41.271 ************************************ 00:09:41.271 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:41.271 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:41.271 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.271 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:41.529 { 00:09:41.529 "name": "Malloc2", 00:09:41.529 "aliases": [ 00:09:41.529 "8ca75571-249c-4d54-9df7-bd3041825b46" 00:09:41.529 ], 00:09:41.529 "product_name": "Malloc disk", 00:09:41.529 "block_size": 512, 00:09:41.529 "num_blocks": 16384, 00:09:41.529 "uuid": "8ca75571-249c-4d54-9df7-bd3041825b46", 00:09:41.529 "assigned_rate_limits": { 00:09:41.529 "rw_ios_per_sec": 0, 00:09:41.529 "rw_mbytes_per_sec": 0, 00:09:41.529 "r_mbytes_per_sec": 0, 00:09:41.529 "w_mbytes_per_sec": 0 00:09:41.529 }, 00:09:41.529 "claimed": false, 00:09:41.529 "zoned": false, 00:09:41.529 "supported_io_types": { 00:09:41.529 "read": true, 00:09:41.529 "write": true, 00:09:41.529 "unmap": true, 00:09:41.529 "flush": true, 00:09:41.529 "reset": true, 00:09:41.529 "nvme_admin": false, 00:09:41.529 "nvme_io": false, 00:09:41.529 "nvme_io_md": false, 00:09:41.529 "write_zeroes": true, 00:09:41.529 "zcopy": true, 00:09:41.529 "get_zone_info": false, 00:09:41.529 "zone_management": false, 00:09:41.529 "zone_append": false, 00:09:41.529 "compare": false, 00:09:41.529 
"compare_and_write": false, 00:09:41.529 "abort": true, 00:09:41.529 "seek_hole": false, 00:09:41.529 "seek_data": false, 00:09:41.529 "copy": true, 00:09:41.529 "nvme_iov_md": false 00:09:41.529 }, 00:09:41.529 "memory_domains": [ 00:09:41.529 { 00:09:41.529 "dma_device_id": "system", 00:09:41.529 "dma_device_type": 1 00:09:41.529 }, 00:09:41.529 { 00:09:41.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.529 "dma_device_type": 2 00:09:41.529 } 00:09:41.529 ], 00:09:41.529 "driver_specific": {} 00:09:41.529 } 00:09:41.529 ]' 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 [2024-12-06 13:04:28.437259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:41.529 [2024-12-06 13:04:28.437344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.529 [2024-12-06 13:04:28.437380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:41.529 [2024-12-06 13:04:28.437399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.529 [2024-12-06 13:04:28.440499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.529 [2024-12-06 13:04:28.440738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:41.529 Passthru0 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.529 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:41.529 { 00:09:41.529 "name": "Malloc2", 00:09:41.529 "aliases": [ 00:09:41.529 "8ca75571-249c-4d54-9df7-bd3041825b46" 00:09:41.529 ], 00:09:41.530 "product_name": "Malloc disk", 00:09:41.530 "block_size": 512, 00:09:41.530 "num_blocks": 16384, 00:09:41.530 "uuid": "8ca75571-249c-4d54-9df7-bd3041825b46", 00:09:41.530 "assigned_rate_limits": { 00:09:41.530 "rw_ios_per_sec": 0, 00:09:41.530 "rw_mbytes_per_sec": 0, 00:09:41.530 "r_mbytes_per_sec": 0, 00:09:41.530 "w_mbytes_per_sec": 0 00:09:41.530 }, 00:09:41.530 "claimed": true, 00:09:41.530 "claim_type": "exclusive_write", 00:09:41.530 "zoned": false, 00:09:41.530 "supported_io_types": { 00:09:41.530 "read": true, 00:09:41.530 "write": true, 00:09:41.530 "unmap": true, 00:09:41.530 "flush": true, 00:09:41.530 "reset": true, 00:09:41.530 "nvme_admin": false, 00:09:41.530 "nvme_io": false, 00:09:41.530 "nvme_io_md": false, 00:09:41.530 "write_zeroes": true, 00:09:41.530 "zcopy": true, 00:09:41.530 "get_zone_info": false, 00:09:41.530 "zone_management": false, 00:09:41.530 "zone_append": false, 00:09:41.530 "compare": false, 00:09:41.530 "compare_and_write": false, 00:09:41.530 "abort": true, 00:09:41.530 "seek_hole": false, 00:09:41.530 "seek_data": false, 
00:09:41.530 "copy": true, 00:09:41.530 "nvme_iov_md": false 00:09:41.530 }, 00:09:41.530 "memory_domains": [ 00:09:41.530 { 00:09:41.530 "dma_device_id": "system", 00:09:41.530 "dma_device_type": 1 00:09:41.530 }, 00:09:41.530 { 00:09:41.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.530 "dma_device_type": 2 00:09:41.530 } 00:09:41.530 ], 00:09:41.530 "driver_specific": {} 00:09:41.530 }, 00:09:41.530 { 00:09:41.530 "name": "Passthru0", 00:09:41.530 "aliases": [ 00:09:41.530 "13e42a78-8141-5b1c-8a33-5b89bf09dcf7" 00:09:41.530 ], 00:09:41.530 "product_name": "passthru", 00:09:41.530 "block_size": 512, 00:09:41.530 "num_blocks": 16384, 00:09:41.530 "uuid": "13e42a78-8141-5b1c-8a33-5b89bf09dcf7", 00:09:41.530 "assigned_rate_limits": { 00:09:41.530 "rw_ios_per_sec": 0, 00:09:41.530 "rw_mbytes_per_sec": 0, 00:09:41.530 "r_mbytes_per_sec": 0, 00:09:41.530 "w_mbytes_per_sec": 0 00:09:41.530 }, 00:09:41.530 "claimed": false, 00:09:41.530 "zoned": false, 00:09:41.530 "supported_io_types": { 00:09:41.530 "read": true, 00:09:41.530 "write": true, 00:09:41.530 "unmap": true, 00:09:41.530 "flush": true, 00:09:41.530 "reset": true, 00:09:41.530 "nvme_admin": false, 00:09:41.530 "nvme_io": false, 00:09:41.530 "nvme_io_md": false, 00:09:41.530 "write_zeroes": true, 00:09:41.530 "zcopy": true, 00:09:41.530 "get_zone_info": false, 00:09:41.530 "zone_management": false, 00:09:41.530 "zone_append": false, 00:09:41.530 "compare": false, 00:09:41.530 "compare_and_write": false, 00:09:41.530 "abort": true, 00:09:41.530 "seek_hole": false, 00:09:41.530 "seek_data": false, 00:09:41.530 "copy": true, 00:09:41.530 "nvme_iov_md": false 00:09:41.530 }, 00:09:41.530 "memory_domains": [ 00:09:41.530 { 00:09:41.530 "dma_device_id": "system", 00:09:41.530 "dma_device_type": 1 00:09:41.530 }, 00:09:41.530 { 00:09:41.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.530 "dma_device_type": 2 00:09:41.530 } 00:09:41.530 ], 00:09:41.530 "driver_specific": { 00:09:41.530 "passthru": { 00:09:41.530 "name": "Passthru0", 00:09:41.530 "base_bdev_name": "Malloc2" 00:09:41.530 } 00:09:41.530 } 00:09:41.530 } 00:09:41.530 ]' 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.530 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:41.787 ************************************ 00:09:41.787 END TEST rpc_daemon_integrity 00:09:41.787 ************************************ 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:41.787 00:09:41.787 real 0m0.348s 00:09:41.787 user 0m0.215s 00:09:41.787 sys 0m0.039s 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.787 13:04:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 13:04:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:41.787 13:04:28 rpc -- rpc/rpc.sh@84 -- # killprocess 57995 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 57995 ']' 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@958 -- # kill -0 57995 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@959 -- # uname 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57995 00:09:41.787 killing process with pid 57995 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57995' 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@973 -- # kill 57995 00:09:41.787 13:04:28 rpc -- common/autotest_common.sh@978 -- # wait 57995 00:09:44.347 00:09:44.347 real 0m5.234s 00:09:44.347 user 0m5.843s 00:09:44.347 sys 0m1.003s 00:09:44.347 ************************************ 00:09:44.347 END TEST rpc 00:09:44.347 ************************************ 00:09:44.347 13:04:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.347 13:04:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.347 13:04:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:44.347 13:04:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.347 13:04:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.347 13:04:30 -- common/autotest_common.sh@10 -- # set +x 00:09:44.347 ************************************ 00:09:44.347 START TEST skip_rpc 00:09:44.347 ************************************ 00:09:44.347 13:04:30 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:44.347 * Looking for test storage... 
00:09:44.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.348 13:04:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.348 --rc genhtml_branch_coverage=1 00:09:44.348 --rc genhtml_function_coverage=1 00:09:44.348 --rc genhtml_legend=1 00:09:44.348 --rc geninfo_all_blocks=1 00:09:44.348 --rc geninfo_unexecuted_blocks=1 00:09:44.348 00:09:44.348 ' 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.348 --rc genhtml_branch_coverage=1 00:09:44.348 --rc genhtml_function_coverage=1 00:09:44.348 --rc genhtml_legend=1 00:09:44.348 --rc geninfo_all_blocks=1 00:09:44.348 --rc geninfo_unexecuted_blocks=1 00:09:44.348 00:09:44.348 ' 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.348 --rc genhtml_branch_coverage=1 00:09:44.348 --rc genhtml_function_coverage=1 00:09:44.348 --rc genhtml_legend=1 00:09:44.348 --rc geninfo_all_blocks=1 00:09:44.348 --rc geninfo_unexecuted_blocks=1 00:09:44.348 00:09:44.348 ' 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.348 --rc genhtml_branch_coverage=1 00:09:44.348 --rc genhtml_function_coverage=1 00:09:44.348 --rc genhtml_legend=1 00:09:44.348 --rc geninfo_all_blocks=1 00:09:44.348 --rc geninfo_unexecuted_blocks=1 00:09:44.348 00:09:44.348 ' 00:09:44.348 13:04:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:44.348 13:04:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:44.348 13:04:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.348 13:04:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.348 ************************************ 00:09:44.348 START TEST skip_rpc 00:09:44.348 ************************************ 00:09:44.348 13:04:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:44.348 13:04:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58224 00:09:44.348 13:04:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:44.348 13:04:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.348 13:04:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:44.348 [2024-12-06 13:04:31.312965] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
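Before each suite runs, autotest_common.sh probes the installed lcov with the lt 1.15 2 / cmp_versions trace captured above: both version strings are split on ., - and :, then compared field by field until one side wins. A minimal standalone sketch of that comparison; treating absent fields as 0 is an assumption of this reduction, the real logic lives in scripts/common.sh:

    # Sketch of the field-wise version test traced above (lt 1.15 2).
    lt() {
        local IFS='.-:' v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field wins
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # 1 < 2 on the first field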
00:09:44.348 [2024-12-06 13:04:31.313423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58224 ] 00:09:44.607 [2024-12-06 13:04:31.505554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.865 [2024-12-06 13:04:31.655098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58224 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58224 ']' 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58224 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58224 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.145 killing process with pid 58224 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58224' 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58224 00:09:50.145 13:04:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58224 00:09:51.517 00:09:51.517 real 0m7.156s 00:09:51.517 user 0m6.578s 00:09:51.517 sys 0m0.470s 00:09:51.517 13:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.517 ************************************ 00:09:51.517 END TEST skip_rpc 00:09:51.517 ************************************ 00:09:51.517 13:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:09:51.517 13:04:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:51.517 13:04:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.517 13:04:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.517 13:04:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.517 ************************************ 00:09:51.517 START TEST skip_rpc_with_json 00:09:51.517 ************************************ 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:51.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58328 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58328 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58328 ']' 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.517 13:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:51.517 [2024-12-06 13:04:38.527299] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:09:51.517 [2024-12-06 13:04:38.527831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58328 ] 00:09:51.775 [2024-12-06 13:04:38.716562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.051 [2024-12-06 13:04:38.860736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:53.009 [2024-12-06 13:04:39.695663] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:53.009 request: 00:09:53.009 { 00:09:53.009 "trtype": "tcp", 00:09:53.009 "method": "nvmf_get_transports", 00:09:53.009 "req_id": 1 00:09:53.009 } 00:09:53.009 Got JSON-RPC error response 00:09:53.009 response: 00:09:53.009 { 00:09:53.009 "code": -19, 00:09:53.009 "message": "No such device" 00:09:53.009 } 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:53.009 [2024-12-06 13:04:39.707811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.009 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:53.009 { 00:09:53.009 "subsystems": [ 00:09:53.009 { 00:09:53.009 "subsystem": "fsdev", 00:09:53.009 "config": [ 00:09:53.009 { 00:09:53.009 "method": "fsdev_set_opts", 00:09:53.009 "params": { 00:09:53.009 "fsdev_io_pool_size": 65535, 00:09:53.009 "fsdev_io_cache_size": 256 00:09:53.009 } 00:09:53.009 } 00:09:53.009 ] 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "subsystem": "keyring", 00:09:53.009 "config": [] 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "subsystem": "iobuf", 00:09:53.009 "config": [ 00:09:53.009 { 00:09:53.009 "method": "iobuf_set_options", 00:09:53.009 "params": { 00:09:53.009 "small_pool_count": 8192, 00:09:53.009 "large_pool_count": 1024, 00:09:53.009 "small_bufsize": 8192, 00:09:53.009 "large_bufsize": 135168, 00:09:53.009 "enable_numa": false 00:09:53.009 } 00:09:53.009 } 00:09:53.009 ] 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "subsystem": "sock", 00:09:53.009 "config": [ 00:09:53.009 { 
00:09:53.009 "method": "sock_set_default_impl", 00:09:53.009 "params": { 00:09:53.009 "impl_name": "posix" 00:09:53.009 } 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "method": "sock_impl_set_options", 00:09:53.009 "params": { 00:09:53.009 "impl_name": "ssl", 00:09:53.009 "recv_buf_size": 4096, 00:09:53.009 "send_buf_size": 4096, 00:09:53.009 "enable_recv_pipe": true, 00:09:53.009 "enable_quickack": false, 00:09:53.009 "enable_placement_id": 0, 00:09:53.009 "enable_zerocopy_send_server": true, 00:09:53.009 "enable_zerocopy_send_client": false, 00:09:53.009 "zerocopy_threshold": 0, 00:09:53.009 "tls_version": 0, 00:09:53.009 "enable_ktls": false 00:09:53.009 } 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "method": "sock_impl_set_options", 00:09:53.009 "params": { 00:09:53.009 "impl_name": "posix", 00:09:53.009 "recv_buf_size": 2097152, 00:09:53.009 "send_buf_size": 2097152, 00:09:53.009 "enable_recv_pipe": true, 00:09:53.009 "enable_quickack": false, 00:09:53.009 "enable_placement_id": 0, 00:09:53.009 "enable_zerocopy_send_server": true, 00:09:53.009 "enable_zerocopy_send_client": false, 00:09:53.009 "zerocopy_threshold": 0, 00:09:53.009 "tls_version": 0, 00:09:53.009 "enable_ktls": false 00:09:53.009 } 00:09:53.009 } 00:09:53.009 ] 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "subsystem": "vmd", 00:09:53.009 "config": [] 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "subsystem": "accel", 00:09:53.009 "config": [ 00:09:53.009 { 00:09:53.009 "method": "accel_set_options", 00:09:53.009 "params": { 00:09:53.009 "small_cache_size": 128, 00:09:53.009 "large_cache_size": 16, 00:09:53.009 "task_count": 2048, 00:09:53.009 "sequence_count": 2048, 00:09:53.009 "buf_count": 2048 00:09:53.009 } 00:09:53.009 } 00:09:53.009 ] 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "subsystem": "bdev", 00:09:53.009 "config": [ 00:09:53.009 { 00:09:53.009 "method": "bdev_set_options", 00:09:53.009 "params": { 00:09:53.009 "bdev_io_pool_size": 65535, 00:09:53.009 "bdev_io_cache_size": 256, 00:09:53.009 "bdev_auto_examine": true, 00:09:53.009 "iobuf_small_cache_size": 128, 00:09:53.009 "iobuf_large_cache_size": 16 00:09:53.009 } 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "method": "bdev_raid_set_options", 00:09:53.009 "params": { 00:09:53.009 "process_window_size_kb": 1024, 00:09:53.009 "process_max_bandwidth_mb_sec": 0 00:09:53.009 } 00:09:53.009 }, 00:09:53.009 { 00:09:53.009 "method": "bdev_iscsi_set_options", 00:09:53.009 "params": { 00:09:53.009 "timeout_sec": 30 00:09:53.009 } 00:09:53.009 }, 00:09:53.009 { 00:09:53.010 "method": "bdev_nvme_set_options", 00:09:53.010 "params": { 00:09:53.010 "action_on_timeout": "none", 00:09:53.010 "timeout_us": 0, 00:09:53.010 "timeout_admin_us": 0, 00:09:53.010 "keep_alive_timeout_ms": 10000, 00:09:53.010 "arbitration_burst": 0, 00:09:53.010 "low_priority_weight": 0, 00:09:53.010 "medium_priority_weight": 0, 00:09:53.010 "high_priority_weight": 0, 00:09:53.010 "nvme_adminq_poll_period_us": 10000, 00:09:53.010 "nvme_ioq_poll_period_us": 0, 00:09:53.010 "io_queue_requests": 0, 00:09:53.010 "delay_cmd_submit": true, 00:09:53.010 "transport_retry_count": 4, 00:09:53.010 "bdev_retry_count": 3, 00:09:53.010 "transport_ack_timeout": 0, 00:09:53.010 "ctrlr_loss_timeout_sec": 0, 00:09:53.010 "reconnect_delay_sec": 0, 00:09:53.010 "fast_io_fail_timeout_sec": 0, 00:09:53.010 "disable_auto_failback": false, 00:09:53.010 "generate_uuids": false, 00:09:53.010 "transport_tos": 0, 00:09:53.010 "nvme_error_stat": false, 00:09:53.010 "rdma_srq_size": 0, 00:09:53.010 "io_path_stat": false, 
00:09:53.010 "allow_accel_sequence": false, 00:09:53.010 "rdma_max_cq_size": 0, 00:09:53.010 "rdma_cm_event_timeout_ms": 0, 00:09:53.010 "dhchap_digests": [ 00:09:53.010 "sha256", 00:09:53.010 "sha384", 00:09:53.010 "sha512" 00:09:53.010 ], 00:09:53.010 "dhchap_dhgroups": [ 00:09:53.010 "null", 00:09:53.010 "ffdhe2048", 00:09:53.010 "ffdhe3072", 00:09:53.010 "ffdhe4096", 00:09:53.010 "ffdhe6144", 00:09:53.010 "ffdhe8192" 00:09:53.010 ] 00:09:53.010 } 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "method": "bdev_nvme_set_hotplug", 00:09:53.010 "params": { 00:09:53.010 "period_us": 100000, 00:09:53.010 "enable": false 00:09:53.010 } 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "method": "bdev_wait_for_examine" 00:09:53.010 } 00:09:53.010 ] 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "scsi", 00:09:53.010 "config": null 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "scheduler", 00:09:53.010 "config": [ 00:09:53.010 { 00:09:53.010 "method": "framework_set_scheduler", 00:09:53.010 "params": { 00:09:53.010 "name": "static" 00:09:53.010 } 00:09:53.010 } 00:09:53.010 ] 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "vhost_scsi", 00:09:53.010 "config": [] 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "vhost_blk", 00:09:53.010 "config": [] 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "ublk", 00:09:53.010 "config": [] 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "nbd", 00:09:53.010 "config": [] 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "nvmf", 00:09:53.010 "config": [ 00:09:53.010 { 00:09:53.010 "method": "nvmf_set_config", 00:09:53.010 "params": { 00:09:53.010 "discovery_filter": "match_any", 00:09:53.010 "admin_cmd_passthru": { 00:09:53.010 "identify_ctrlr": false 00:09:53.010 }, 00:09:53.010 "dhchap_digests": [ 00:09:53.010 "sha256", 00:09:53.010 "sha384", 00:09:53.010 "sha512" 00:09:53.010 ], 00:09:53.010 "dhchap_dhgroups": [ 00:09:53.010 "null", 00:09:53.010 "ffdhe2048", 00:09:53.010 "ffdhe3072", 00:09:53.010 "ffdhe4096", 00:09:53.010 "ffdhe6144", 00:09:53.010 "ffdhe8192" 00:09:53.010 ] 00:09:53.010 } 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "method": "nvmf_set_max_subsystems", 00:09:53.010 "params": { 00:09:53.010 "max_subsystems": 1024 00:09:53.010 } 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "method": "nvmf_set_crdt", 00:09:53.010 "params": { 00:09:53.010 "crdt1": 0, 00:09:53.010 "crdt2": 0, 00:09:53.010 "crdt3": 0 00:09:53.010 } 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "method": "nvmf_create_transport", 00:09:53.010 "params": { 00:09:53.010 "trtype": "TCP", 00:09:53.010 "max_queue_depth": 128, 00:09:53.010 "max_io_qpairs_per_ctrlr": 127, 00:09:53.010 "in_capsule_data_size": 4096, 00:09:53.010 "max_io_size": 131072, 00:09:53.010 "io_unit_size": 131072, 00:09:53.010 "max_aq_depth": 128, 00:09:53.010 "num_shared_buffers": 511, 00:09:53.010 "buf_cache_size": 4294967295, 00:09:53.010 "dif_insert_or_strip": false, 00:09:53.010 "zcopy": false, 00:09:53.010 "c2h_success": true, 00:09:53.010 "sock_priority": 0, 00:09:53.010 "abort_timeout_sec": 1, 00:09:53.010 "ack_timeout": 0, 00:09:53.010 "data_wr_pool_size": 0 00:09:53.010 } 00:09:53.010 } 00:09:53.010 ] 00:09:53.010 }, 00:09:53.010 { 00:09:53.010 "subsystem": "iscsi", 00:09:53.010 "config": [ 00:09:53.010 { 00:09:53.010 "method": "iscsi_set_options", 00:09:53.010 "params": { 00:09:53.010 "node_base": "iqn.2016-06.io.spdk", 00:09:53.010 "max_sessions": 128, 00:09:53.010 "max_connections_per_session": 2, 00:09:53.010 "max_queue_depth": 64, 00:09:53.010 
"default_time2wait": 2, 00:09:53.010 "default_time2retain": 20, 00:09:53.010 "first_burst_length": 8192, 00:09:53.010 "immediate_data": true, 00:09:53.010 "allow_duplicated_isid": false, 00:09:53.010 "error_recovery_level": 0, 00:09:53.010 "nop_timeout": 60, 00:09:53.010 "nop_in_interval": 30, 00:09:53.010 "disable_chap": false, 00:09:53.010 "require_chap": false, 00:09:53.010 "mutual_chap": false, 00:09:53.010 "chap_group": 0, 00:09:53.010 "max_large_datain_per_connection": 64, 00:09:53.010 "max_r2t_per_connection": 4, 00:09:53.010 "pdu_pool_size": 36864, 00:09:53.010 "immediate_data_pool_size": 16384, 00:09:53.010 "data_out_pool_size": 2048 00:09:53.010 } 00:09:53.010 } 00:09:53.010 ] 00:09:53.010 } 00:09:53.010 ] 00:09:53.010 } 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58328 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58328 ']' 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58328 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58328 00:09:53.010 killing process with pid 58328 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58328' 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58328 00:09:53.010 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58328 00:09:55.537 13:04:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58378 00:09:55.537 13:04:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:55.537 13:04:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58378 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58378 ']' 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58378 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58378 00:10:00.794 killing process with pid 58378 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58378' 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58378 00:10:00.794 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58378 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:02.695 00:10:02.695 real 0m10.880s 00:10:02.695 user 0m10.326s 00:10:02.695 sys 0m0.990s 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 ************************************ 00:10:02.695 END TEST skip_rpc_with_json 00:10:02.695 ************************************ 00:10:02.695 13:04:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:02.695 13:04:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.695 13:04:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.695 13:04:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 ************************************ 00:10:02.695 START TEST skip_rpc_with_delay 00:10:02.695 ************************************ 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:02.695 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:02.696 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:02.696 [2024-12-06 13:04:49.479128] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
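That *ERROR* line is the expected outcome: skip_rpc_with_delay passes precisely because spdk_tgt refuses the contradictory flags. Reduced to its essence (the binary path and flags are the ones the log itself invokes), the negative test looks like:

    # The pass condition for this negative test is a non-zero exit status:
    # --wait-for-rpc is meaningless when --no-rpc-server disables the RPC server.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: contradictory flags were accepted" >&2
        exit 1
    fi
    echo "OK: --wait-for-rpc rejected when no RPC server will start"

The es=1 assertions that follow are the suite's NOT wrapper verifying exactly this non-zero exit.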
00:10:02.696 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:02.696 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.696 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.696 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.696 00:10:02.696 real 0m0.230s 00:10:02.696 user 0m0.112s 00:10:02.696 sys 0m0.115s 00:10:02.696 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.696 13:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:02.696 ************************************ 00:10:02.696 END TEST skip_rpc_with_delay 00:10:02.696 ************************************ 00:10:02.696 13:04:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:02.696 13:04:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:02.696 13:04:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:02.696 13:04:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.696 13:04:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.696 13:04:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.696 ************************************ 00:10:02.696 START TEST exit_on_failed_rpc_init 00:10:02.696 ************************************ 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58512 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58512 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58512 ']' 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.696 13:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:02.953 [2024-12-06 13:04:49.743286] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:02.954 [2024-12-06 13:04:49.743477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58512 ] 00:10:02.954 [2024-12-06 13:04:49.934643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.212 [2024-12-06 13:04:50.086941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:04.145 13:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:04.145 [2024-12-06 13:04:51.090850] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:04.145 [2024-12-06 13:04:51.091026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58530 ] 00:10:04.403 [2024-12-06 13:04:51.275438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.659 [2024-12-06 13:04:51.436454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.659 [2024-12-06 13:04:51.436589] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:04.659 [2024-12-06 13:04:51.436615] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:04.659 [2024-12-06 13:04:51.436643] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58512 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58512 ']' 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58512 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58512 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58512' 00:10:04.918 killing process with pid 58512 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58512 00:10:04.918 13:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58512 00:10:07.456 00:10:07.456 real 0m4.334s 00:10:07.456 user 0m4.765s 00:10:07.456 sys 0m0.697s 00:10:07.456 13:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.456 13:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 ************************************ 00:10:07.456 END TEST exit_on_failed_rpc_init 00:10:07.456 ************************************ 00:10:07.456 13:04:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:07.456 00:10:07.456 real 0m23.017s 00:10:07.456 user 0m21.953s 00:10:07.456 sys 0m2.499s 00:10:07.456 13:04:53 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.456 13:04:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 ************************************ 00:10:07.456 END TEST skip_rpc 00:10:07.456 ************************************ 00:10:07.456 13:04:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:07.456 13:04:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.456 13:04:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.456 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 
************************************ 00:10:07.456 START TEST rpc_client 00:10:07.456 ************************************ 00:10:07.456 13:04:54 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:07.456 * Looking for test storage... 00:10:07.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:07.456 13:04:54 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.456 13:04:54 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.456 13:04:54 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.456 13:04:54 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:07.456 13:04:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.457 13:04:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.457 13:04:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.457 13:04:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:07.457 13:04:54 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.457 13:04:54 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.457 --rc genhtml_branch_coverage=1 00:10:07.457 --rc genhtml_function_coverage=1 00:10:07.457 --rc genhtml_legend=1 00:10:07.457 --rc geninfo_all_blocks=1 00:10:07.457 --rc geninfo_unexecuted_blocks=1 00:10:07.457 00:10:07.457 ' 00:10:07.457 13:04:54 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.457 --rc genhtml_branch_coverage=1 00:10:07.457 --rc genhtml_function_coverage=1 00:10:07.457 --rc genhtml_legend=1 00:10:07.457 --rc geninfo_all_blocks=1 00:10:07.457 --rc geninfo_unexecuted_blocks=1 00:10:07.457 00:10:07.457 ' 00:10:07.457 13:04:54 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.457 --rc genhtml_branch_coverage=1 00:10:07.457 --rc genhtml_function_coverage=1 00:10:07.457 --rc genhtml_legend=1 00:10:07.457 --rc geninfo_all_blocks=1 00:10:07.457 --rc geninfo_unexecuted_blocks=1 00:10:07.457 00:10:07.457 ' 00:10:07.457 13:04:54 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.457 --rc genhtml_branch_coverage=1 00:10:07.457 --rc genhtml_function_coverage=1 00:10:07.457 --rc genhtml_legend=1 00:10:07.457 --rc geninfo_all_blocks=1 00:10:07.457 --rc geninfo_unexecuted_blocks=1 00:10:07.457 00:10:07.457 ' 00:10:07.457 13:04:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:07.457 OK 00:10:07.457 13:04:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:07.457 00:10:07.457 real 0m0.261s 00:10:07.457 user 0m0.148s 00:10:07.457 sys 0m0.123s 00:10:07.457 13:04:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.457 13:04:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 ************************************ 00:10:07.457 END TEST rpc_client 00:10:07.457 ************************************ 00:10:07.457 13:04:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:07.457 13:04:54 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.457 13:04:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.457 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:10:07.457 ************************************ 00:10:07.457 START TEST json_config 00:10:07.457 ************************************ 00:10:07.457 13:04:54 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:07.457 13:04:54 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.457 13:04:54 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.457 13:04:54 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.717 13:04:54 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.717 13:04:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.717 13:04:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.717 13:04:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.717 13:04:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.717 13:04:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.717 13:04:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.717 13:04:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.717 13:04:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:07.717 13:04:54 json_config -- scripts/common.sh@345 -- # : 1 00:10:07.717 13:04:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.717 13:04:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.717 13:04:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:07.717 13:04:54 json_config -- scripts/common.sh@353 -- # local d=1 00:10:07.717 13:04:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.717 13:04:54 json_config -- scripts/common.sh@355 -- # echo 1 00:10:07.717 13:04:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.717 13:04:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@353 -- # local d=2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.717 13:04:54 json_config -- scripts/common.sh@355 -- # echo 2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.717 13:04:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.717 13:04:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.717 13:04:54 json_config -- scripts/common.sh@368 -- # return 0 00:10:07.717 13:04:54 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.717 13:04:54 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.717 --rc genhtml_branch_coverage=1 00:10:07.717 --rc genhtml_function_coverage=1 00:10:07.717 --rc genhtml_legend=1 00:10:07.717 --rc geninfo_all_blocks=1 00:10:07.717 --rc geninfo_unexecuted_blocks=1 00:10:07.717 00:10:07.717 ' 00:10:07.717 13:04:54 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.717 --rc genhtml_branch_coverage=1 00:10:07.717 --rc genhtml_function_coverage=1 00:10:07.717 --rc genhtml_legend=1 00:10:07.717 --rc geninfo_all_blocks=1 00:10:07.717 --rc geninfo_unexecuted_blocks=1 00:10:07.717 00:10:07.717 ' 00:10:07.717 13:04:54 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.717 --rc genhtml_branch_coverage=1 00:10:07.717 --rc genhtml_function_coverage=1 00:10:07.717 --rc genhtml_legend=1 00:10:07.717 --rc geninfo_all_blocks=1 00:10:07.717 --rc geninfo_unexecuted_blocks=1 00:10:07.717 00:10:07.717 ' 00:10:07.717 13:04:54 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.717 --rc genhtml_branch_coverage=1 00:10:07.717 --rc genhtml_function_coverage=1 00:10:07.717 --rc genhtml_legend=1 00:10:07.717 --rc geninfo_all_blocks=1 00:10:07.717 --rc geninfo_unexecuted_blocks=1 00:10:07.717 00:10:07.717 ' 00:10:07.717 13:04:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.717 13:04:54 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fc37e1a9-b301-4ee2-b448-5efe352245f6 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=fc37e1a9-b301-4ee2-b448-5efe352245f6 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.717 13:04:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.717 13:04:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.717 13:04:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.717 13:04:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.717 13:04:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.717 13:04:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.717 13:04:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.717 13:04:54 json_config -- paths/export.sh@5 -- # export PATH 00:10:07.717 13:04:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@51 -- # : 0 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.717 13:04:54 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.717 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.717 13:04:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.717 13:04:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:07.717 13:04:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:07.717 13:04:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:07.717 WARNING: No tests are enabled so not running JSON configuration tests 00:10:07.717 13:04:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:07.717 13:04:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:07.718 13:04:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:07.718 13:04:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:07.718 00:10:07.718 real 0m0.189s 00:10:07.718 user 0m0.121s 00:10:07.718 sys 0m0.073s 00:10:07.718 13:04:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.718 ************************************ 00:10:07.718 END TEST json_config 00:10:07.718 13:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:07.718 ************************************ 00:10:07.718 13:04:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:07.718 13:04:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.718 13:04:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.718 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:10:07.718 ************************************ 00:10:07.718 START TEST json_config_extra_key 00:10:07.718 ************************************ 00:10:07.718 13:04:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:07.718 13:04:54 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.718 13:04:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.718 13:04:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.977 13:04:54 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.977 13:04:54 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:07.977 13:04:54 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.977 13:04:54 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.977 --rc genhtml_branch_coverage=1 00:10:07.977 --rc genhtml_function_coverage=1 00:10:07.977 --rc genhtml_legend=1 00:10:07.977 --rc geninfo_all_blocks=1 00:10:07.977 --rc geninfo_unexecuted_blocks=1 00:10:07.977 00:10:07.977 ' 00:10:07.977 13:04:54 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.977 --rc genhtml_branch_coverage=1 00:10:07.977 --rc genhtml_function_coverage=1 00:10:07.977 --rc genhtml_legend=1 00:10:07.977 --rc geninfo_all_blocks=1 00:10:07.977 --rc geninfo_unexecuted_blocks=1 00:10:07.977 00:10:07.977 ' 00:10:07.977 13:04:54 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.977 --rc genhtml_branch_coverage=1 00:10:07.977 --rc genhtml_function_coverage=1 00:10:07.977 --rc genhtml_legend=1 00:10:07.977 --rc geninfo_all_blocks=1 00:10:07.977 --rc geninfo_unexecuted_blocks=1 00:10:07.977 00:10:07.977 ' 00:10:07.977 13:04:54 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.977 --rc genhtml_branch_coverage=1 00:10:07.977 --rc 
genhtml_function_coverage=1 00:10:07.977 --rc genhtml_legend=1 00:10:07.977 --rc geninfo_all_blocks=1 00:10:07.977 --rc geninfo_unexecuted_blocks=1 00:10:07.977 00:10:07.977 ' 00:10:07.977 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fc37e1a9-b301-4ee2-b448-5efe352245f6 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=fc37e1a9-b301-4ee2-b448-5efe352245f6 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.977 13:04:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.977 13:04:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.977 13:04:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.977 13:04:54 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.977 13:04:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:07.977 13:04:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.977 13:04:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.977 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:07.977 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:07.977 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:07.978 INFO: launching applications... 00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
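The xtrace above shows test/json_config/common.sh modeling the app under test as four parallel associative arrays keyed by app name ('target'): its PID, its RPC socket, its spdk_tgt parameters, and the JSON config it is launched with. A minimal standalone reduction of that launch bookkeeping follows; start_app_sketch and the SPDK_REPO/SPDK_BIN variables are illustrative stand-ins, and the canonical logic is json_config_test_start_app in common.sh:

    #!/usr/bin/env bash
    # Launch bookkeeping traced above, reduced to a sketch.
    declare -A app_pid=([target]='')                            # PID once launched
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')   # per-app RPC socket
    declare -A app_params=([target]='-m 0x1 -s 1024')           # core mask + memory size
    declare -A configs_path=([target]="$SPDK_REPO/test/json_config/extra_key.json")

    start_app_sketch() {
        local app=$1
        # Launch spdk_tgt against its own RPC socket with the JSON config under
        # test, and remember the PID for the later shutdown loop. app_params is
        # expanded unquoted on purpose so it splits into separate arguments.
        "$SPDK_BIN/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }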
00:10:07.978 13:04:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58741 00:10:07.978 Waiting for target to run... 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:07.978 13:04:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58741 /var/tmp/spdk_tgt.sock 00:10:07.978 13:04:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58741 ']' 00:10:07.978 13:04:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:07.978 13:04:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:07.978 13:04:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:07.978 13:04:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.978 13:04:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:07.978 [2024-12-06 13:04:54.926655] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:07.978 [2024-12-06 13:04:54.926902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58741 ] 00:10:08.542 [2024-12-06 13:04:55.384766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.542 [2024-12-06 13:04:55.500784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.475 13:04:56 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.475 00:10:09.475 13:04:56 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:09.475 INFO: shutting down applications... 00:10:09.475 13:04:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
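json_config_test_shutdown_app, traced next, is a SIGINT-then-poll loop: kill -0 delivers no signal and only probes whether the PID is still alive, and the 30 iterations of sleep 0.5 cap the wait at roughly 15 seconds. Reduced to a sketch (shutdown_app_sketch is a stand-in name, not the helper's real one):

    shutdown_app_sketch() {
        local pid=$1
        kill -SIGINT "$pid"      # ask the target to exit cleanly
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; it only tests whether the PID exists.
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1                 # target still running after ~15 s
    }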
00:10:09.475 13:04:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58741 ]] 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58741 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58741 00:10:09.475 13:04:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:09.732 13:04:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:09.732 13:04:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:09.732 13:04:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58741 00:10:09.732 13:04:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:10.296 13:04:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:10.296 13:04:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:10.296 13:04:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58741 00:10:10.296 13:04:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:10.873 13:04:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:10.873 13:04:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:10.873 13:04:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58741 00:10:10.873 13:04:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:11.439 13:04:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:11.439 13:04:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:11.439 13:04:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58741 00:10:11.439 13:04:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:12.051 13:04:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:12.051 13:04:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:12.051 13:04:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58741 00:10:12.051 13:04:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:12.051 13:04:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:12.051 13:04:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:12.051 SPDK target shutdown done 00:10:12.051 13:04:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:12.051 Success 00:10:12.051 13:04:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:12.051 00:10:12.051 real 0m4.118s 00:10:12.051 user 0m3.902s 00:10:12.051 sys 0m0.639s 00:10:12.051 13:04:58 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.051 ************************************ 00:10:12.051 END TEST json_config_extra_key 00:10:12.051 13:04:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:12.051 ************************************ 00:10:12.051 13:04:58 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:12.051 13:04:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.051 13:04:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.051 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:10:12.051 ************************************ 00:10:12.051 START TEST alias_rpc 00:10:12.051 ************************************ 00:10:12.051 13:04:58 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:12.051 * Looking for test storage... 00:10:12.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:12.051 13:04:58 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.051 13:04:58 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.051 13:04:58 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.051 13:04:58 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.051 13:04:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.052 --rc genhtml_branch_coverage=1 00:10:12.052 --rc genhtml_function_coverage=1 00:10:12.052 --rc genhtml_legend=1 00:10:12.052 --rc geninfo_all_blocks=1 00:10:12.052 --rc geninfo_unexecuted_blocks=1 00:10:12.052 00:10:12.052 ' 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.052 --rc genhtml_branch_coverage=1 00:10:12.052 --rc genhtml_function_coverage=1 00:10:12.052 --rc genhtml_legend=1 00:10:12.052 --rc geninfo_all_blocks=1 00:10:12.052 --rc geninfo_unexecuted_blocks=1 00:10:12.052 00:10:12.052 ' 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.052 --rc genhtml_branch_coverage=1 00:10:12.052 --rc genhtml_function_coverage=1 00:10:12.052 --rc genhtml_legend=1 00:10:12.052 --rc geninfo_all_blocks=1 00:10:12.052 --rc geninfo_unexecuted_blocks=1 00:10:12.052 00:10:12.052 ' 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.052 --rc genhtml_branch_coverage=1 00:10:12.052 --rc genhtml_function_coverage=1 00:10:12.052 --rc genhtml_legend=1 00:10:12.052 --rc geninfo_all_blocks=1 00:10:12.052 --rc geninfo_unexecuted_blocks=1 00:10:12.052 00:10:12.052 ' 00:10:12.052 13:04:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:12.052 13:04:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58846 00:10:12.052 13:04:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58846 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58846 ']' 00:10:12.052 13:04:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
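waitforlisten (autotest_common.sh) guards every test in this run: it blocks until the freshly forked spdk_tgt answers on its UNIX-domain RPC socket, with max_retries=100 as traced above. A plausible reduction is below; the rpc_get_methods probe, the 0.5 s poll interval, and the SPDK_REPO variable are assumptions (rpc.py's -s and -t flags do appear verbatim later in this log), so treat it as a sketch rather than a copy of the real helper:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            # Abort early if the target died while starting up.
            kill -0 "$pid" 2>/dev/null || return 1
            # Probe the socket; -t 1 caps each RPC attempt at one second.
            "$SPDK_REPO/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }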
00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.052 13:04:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.309 [2024-12-06 13:04:59.106700] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:12.309 [2024-12-06 13:04:59.106903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58846 ] 00:10:12.309 [2024-12-06 13:04:59.299672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.567 [2024-12-06 13:04:59.464023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.499 13:05:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.499 13:05:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:13.499 13:05:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:13.758 13:05:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58846 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58846 ']' 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58846 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58846 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.758 killing process with pid 58846 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58846' 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 58846 00:10:13.758 13:05:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 58846 00:10:16.297 00:10:16.297 real 0m4.019s 00:10:16.297 user 0m4.181s 00:10:16.297 sys 0m0.655s 00:10:16.297 13:05:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.297 ************************************ 00:10:16.297 13:05:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.297 END TEST alias_rpc 00:10:16.297 ************************************ 00:10:16.297 13:05:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:16.297 13:05:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:16.297 13:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.297 13:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.297 13:05:02 -- common/autotest_common.sh@10 -- # set +x 00:10:16.297 ************************************ 00:10:16.297 START TEST spdkcli_tcp 00:10:16.297 ************************************ 00:10:16.297 13:05:02 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:16.297 * Looking for test storage... 
00:10:16.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:16.297 13:05:02 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.297 13:05:02 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.297 13:05:02 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.297 13:05:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.297 --rc genhtml_branch_coverage=1 00:10:16.297 --rc genhtml_function_coverage=1 00:10:16.297 --rc genhtml_legend=1 00:10:16.297 --rc geninfo_all_blocks=1 00:10:16.297 --rc geninfo_unexecuted_blocks=1 00:10:16.297 00:10:16.297 ' 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.297 --rc genhtml_branch_coverage=1 00:10:16.297 --rc genhtml_function_coverage=1 00:10:16.297 --rc genhtml_legend=1 00:10:16.297 --rc geninfo_all_blocks=1 00:10:16.297 --rc geninfo_unexecuted_blocks=1 00:10:16.297 
00:10:16.297 ' 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.297 --rc genhtml_branch_coverage=1 00:10:16.297 --rc genhtml_function_coverage=1 00:10:16.297 --rc genhtml_legend=1 00:10:16.297 --rc geninfo_all_blocks=1 00:10:16.297 --rc geninfo_unexecuted_blocks=1 00:10:16.297 00:10:16.297 ' 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.297 --rc genhtml_branch_coverage=1 00:10:16.297 --rc genhtml_function_coverage=1 00:10:16.297 --rc genhtml_legend=1 00:10:16.297 --rc geninfo_all_blocks=1 00:10:16.297 --rc geninfo_unexecuted_blocks=1 00:10:16.297 00:10:16.297 ' 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58953 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58953 00:10:16.297 13:05:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58953 ']' 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.297 13:05:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:16.297 [2024-12-06 13:05:03.172538] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:16.297 [2024-12-06 13:05:03.172743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:10:16.556 [2024-12-06 13:05:03.360340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:16.556 [2024-12-06 13:05:03.494248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.556 [2024-12-06 13:05:03.494253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.534 13:05:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.534 13:05:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:17.534 13:05:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58970 00:10:17.534 13:05:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:17.534 13:05:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:17.792 [ 00:10:17.792 "bdev_malloc_delete", 00:10:17.792 "bdev_malloc_create", 00:10:17.792 "bdev_null_resize", 00:10:17.792 "bdev_null_delete", 00:10:17.792 "bdev_null_create", 00:10:17.792 "bdev_nvme_cuse_unregister", 00:10:17.792 "bdev_nvme_cuse_register", 00:10:17.792 "bdev_opal_new_user", 00:10:17.792 "bdev_opal_set_lock_state", 00:10:17.792 "bdev_opal_delete", 00:10:17.792 "bdev_opal_get_info", 00:10:17.792 "bdev_opal_create", 00:10:17.792 "bdev_nvme_opal_revert", 00:10:17.792 "bdev_nvme_opal_init", 00:10:17.792 "bdev_nvme_send_cmd", 00:10:17.792 "bdev_nvme_set_keys", 00:10:17.792 "bdev_nvme_get_path_iostat", 00:10:17.792 "bdev_nvme_get_mdns_discovery_info", 00:10:17.792 "bdev_nvme_stop_mdns_discovery", 00:10:17.792 "bdev_nvme_start_mdns_discovery", 00:10:17.792 "bdev_nvme_set_multipath_policy", 00:10:17.792 "bdev_nvme_set_preferred_path", 00:10:17.792 "bdev_nvme_get_io_paths", 00:10:17.792 "bdev_nvme_remove_error_injection", 00:10:17.792 "bdev_nvme_add_error_injection", 00:10:17.792 "bdev_nvme_get_discovery_info", 00:10:17.792 "bdev_nvme_stop_discovery", 00:10:17.792 "bdev_nvme_start_discovery", 00:10:17.792 "bdev_nvme_get_controller_health_info", 00:10:17.792 "bdev_nvme_disable_controller", 00:10:17.792 "bdev_nvme_enable_controller", 00:10:17.792 "bdev_nvme_reset_controller", 00:10:17.792 "bdev_nvme_get_transport_statistics", 00:10:17.792 "bdev_nvme_apply_firmware", 00:10:17.792 "bdev_nvme_detach_controller", 00:10:17.792 "bdev_nvme_get_controllers", 00:10:17.792 "bdev_nvme_attach_controller", 00:10:17.792 "bdev_nvme_set_hotplug", 00:10:17.792 "bdev_nvme_set_options", 00:10:17.792 "bdev_passthru_delete", 00:10:17.792 "bdev_passthru_create", 00:10:17.792 "bdev_lvol_set_parent_bdev", 00:10:17.792 "bdev_lvol_set_parent", 00:10:17.792 "bdev_lvol_check_shallow_copy", 00:10:17.792 "bdev_lvol_start_shallow_copy", 00:10:17.792 "bdev_lvol_grow_lvstore", 00:10:17.792 "bdev_lvol_get_lvols", 00:10:17.792 "bdev_lvol_get_lvstores", 00:10:17.792 "bdev_lvol_delete", 00:10:17.792 "bdev_lvol_set_read_only", 00:10:17.792 "bdev_lvol_resize", 00:10:17.792 "bdev_lvol_decouple_parent", 00:10:17.792 "bdev_lvol_inflate", 00:10:17.792 "bdev_lvol_rename", 00:10:17.792 "bdev_lvol_clone_bdev", 00:10:17.792 "bdev_lvol_clone", 00:10:17.792 "bdev_lvol_snapshot", 00:10:17.792 "bdev_lvol_create", 00:10:17.792 "bdev_lvol_delete_lvstore", 00:10:17.792 "bdev_lvol_rename_lvstore", 00:10:17.792 
"bdev_lvol_create_lvstore", 00:10:17.792 "bdev_raid_set_options", 00:10:17.792 "bdev_raid_remove_base_bdev", 00:10:17.792 "bdev_raid_add_base_bdev", 00:10:17.792 "bdev_raid_delete", 00:10:17.792 "bdev_raid_create", 00:10:17.792 "bdev_raid_get_bdevs", 00:10:17.792 "bdev_error_inject_error", 00:10:17.792 "bdev_error_delete", 00:10:17.792 "bdev_error_create", 00:10:17.792 "bdev_split_delete", 00:10:17.792 "bdev_split_create", 00:10:17.792 "bdev_delay_delete", 00:10:17.792 "bdev_delay_create", 00:10:17.792 "bdev_delay_update_latency", 00:10:17.792 "bdev_zone_block_delete", 00:10:17.792 "bdev_zone_block_create", 00:10:17.792 "blobfs_create", 00:10:17.792 "blobfs_detect", 00:10:17.792 "blobfs_set_cache_size", 00:10:17.792 "bdev_xnvme_delete", 00:10:17.792 "bdev_xnvme_create", 00:10:17.792 "bdev_aio_delete", 00:10:17.792 "bdev_aio_rescan", 00:10:17.792 "bdev_aio_create", 00:10:17.792 "bdev_ftl_set_property", 00:10:17.792 "bdev_ftl_get_properties", 00:10:17.792 "bdev_ftl_get_stats", 00:10:17.792 "bdev_ftl_unmap", 00:10:17.792 "bdev_ftl_unload", 00:10:17.792 "bdev_ftl_delete", 00:10:17.792 "bdev_ftl_load", 00:10:17.792 "bdev_ftl_create", 00:10:17.792 "bdev_virtio_attach_controller", 00:10:17.792 "bdev_virtio_scsi_get_devices", 00:10:17.792 "bdev_virtio_detach_controller", 00:10:17.792 "bdev_virtio_blk_set_hotplug", 00:10:17.792 "bdev_iscsi_delete", 00:10:17.792 "bdev_iscsi_create", 00:10:17.792 "bdev_iscsi_set_options", 00:10:17.792 "accel_error_inject_error", 00:10:17.792 "ioat_scan_accel_module", 00:10:17.792 "dsa_scan_accel_module", 00:10:17.792 "iaa_scan_accel_module", 00:10:17.792 "keyring_file_remove_key", 00:10:17.792 "keyring_file_add_key", 00:10:17.792 "keyring_linux_set_options", 00:10:17.792 "fsdev_aio_delete", 00:10:17.792 "fsdev_aio_create", 00:10:17.792 "iscsi_get_histogram", 00:10:17.792 "iscsi_enable_histogram", 00:10:17.792 "iscsi_set_options", 00:10:17.792 "iscsi_get_auth_groups", 00:10:17.792 "iscsi_auth_group_remove_secret", 00:10:17.792 "iscsi_auth_group_add_secret", 00:10:17.792 "iscsi_delete_auth_group", 00:10:17.792 "iscsi_create_auth_group", 00:10:17.792 "iscsi_set_discovery_auth", 00:10:17.792 "iscsi_get_options", 00:10:17.792 "iscsi_target_node_request_logout", 00:10:17.792 "iscsi_target_node_set_redirect", 00:10:17.792 "iscsi_target_node_set_auth", 00:10:17.792 "iscsi_target_node_add_lun", 00:10:17.792 "iscsi_get_stats", 00:10:17.792 "iscsi_get_connections", 00:10:17.792 "iscsi_portal_group_set_auth", 00:10:17.792 "iscsi_start_portal_group", 00:10:17.792 "iscsi_delete_portal_group", 00:10:17.792 "iscsi_create_portal_group", 00:10:17.792 "iscsi_get_portal_groups", 00:10:17.792 "iscsi_delete_target_node", 00:10:17.792 "iscsi_target_node_remove_pg_ig_maps", 00:10:17.792 "iscsi_target_node_add_pg_ig_maps", 00:10:17.792 "iscsi_create_target_node", 00:10:17.792 "iscsi_get_target_nodes", 00:10:17.792 "iscsi_delete_initiator_group", 00:10:17.792 "iscsi_initiator_group_remove_initiators", 00:10:17.792 "iscsi_initiator_group_add_initiators", 00:10:17.792 "iscsi_create_initiator_group", 00:10:17.792 "iscsi_get_initiator_groups", 00:10:17.792 "nvmf_set_crdt", 00:10:17.792 "nvmf_set_config", 00:10:17.792 "nvmf_set_max_subsystems", 00:10:17.792 "nvmf_stop_mdns_prr", 00:10:17.792 "nvmf_publish_mdns_prr", 00:10:17.792 "nvmf_subsystem_get_listeners", 00:10:17.792 "nvmf_subsystem_get_qpairs", 00:10:17.792 "nvmf_subsystem_get_controllers", 00:10:17.792 "nvmf_get_stats", 00:10:17.792 "nvmf_get_transports", 00:10:17.792 "nvmf_create_transport", 00:10:17.792 "nvmf_get_targets", 00:10:17.792 
"nvmf_delete_target", 00:10:17.792 "nvmf_create_target", 00:10:17.792 "nvmf_subsystem_allow_any_host", 00:10:17.792 "nvmf_subsystem_set_keys", 00:10:17.792 "nvmf_subsystem_remove_host", 00:10:17.792 "nvmf_subsystem_add_host", 00:10:17.792 "nvmf_ns_remove_host", 00:10:17.792 "nvmf_ns_add_host", 00:10:17.792 "nvmf_subsystem_remove_ns", 00:10:17.792 "nvmf_subsystem_set_ns_ana_group", 00:10:17.792 "nvmf_subsystem_add_ns", 00:10:17.792 "nvmf_subsystem_listener_set_ana_state", 00:10:17.792 "nvmf_discovery_get_referrals", 00:10:17.792 "nvmf_discovery_remove_referral", 00:10:17.792 "nvmf_discovery_add_referral", 00:10:17.792 "nvmf_subsystem_remove_listener", 00:10:17.792 "nvmf_subsystem_add_listener", 00:10:17.792 "nvmf_delete_subsystem", 00:10:17.792 "nvmf_create_subsystem", 00:10:17.792 "nvmf_get_subsystems", 00:10:17.792 "env_dpdk_get_mem_stats", 00:10:17.792 "nbd_get_disks", 00:10:17.792 "nbd_stop_disk", 00:10:17.792 "nbd_start_disk", 00:10:17.792 "ublk_recover_disk", 00:10:17.792 "ublk_get_disks", 00:10:17.792 "ublk_stop_disk", 00:10:17.792 "ublk_start_disk", 00:10:17.792 "ublk_destroy_target", 00:10:17.792 "ublk_create_target", 00:10:17.792 "virtio_blk_create_transport", 00:10:17.792 "virtio_blk_get_transports", 00:10:17.792 "vhost_controller_set_coalescing", 00:10:17.792 "vhost_get_controllers", 00:10:17.792 "vhost_delete_controller", 00:10:17.792 "vhost_create_blk_controller", 00:10:17.792 "vhost_scsi_controller_remove_target", 00:10:17.792 "vhost_scsi_controller_add_target", 00:10:17.792 "vhost_start_scsi_controller", 00:10:17.792 "vhost_create_scsi_controller", 00:10:17.792 "thread_set_cpumask", 00:10:17.792 "scheduler_set_options", 00:10:17.792 "framework_get_governor", 00:10:17.792 "framework_get_scheduler", 00:10:17.792 "framework_set_scheduler", 00:10:17.792 "framework_get_reactors", 00:10:17.792 "thread_get_io_channels", 00:10:17.792 "thread_get_pollers", 00:10:17.792 "thread_get_stats", 00:10:17.792 "framework_monitor_context_switch", 00:10:17.792 "spdk_kill_instance", 00:10:17.792 "log_enable_timestamps", 00:10:17.792 "log_get_flags", 00:10:17.792 "log_clear_flag", 00:10:17.792 "log_set_flag", 00:10:17.792 "log_get_level", 00:10:17.792 "log_set_level", 00:10:17.792 "log_get_print_level", 00:10:17.792 "log_set_print_level", 00:10:17.792 "framework_enable_cpumask_locks", 00:10:17.792 "framework_disable_cpumask_locks", 00:10:17.792 "framework_wait_init", 00:10:17.792 "framework_start_init", 00:10:17.793 "scsi_get_devices", 00:10:17.793 "bdev_get_histogram", 00:10:17.793 "bdev_enable_histogram", 00:10:17.793 "bdev_set_qos_limit", 00:10:17.793 "bdev_set_qd_sampling_period", 00:10:17.793 "bdev_get_bdevs", 00:10:17.793 "bdev_reset_iostat", 00:10:17.793 "bdev_get_iostat", 00:10:17.793 "bdev_examine", 00:10:17.793 "bdev_wait_for_examine", 00:10:17.793 "bdev_set_options", 00:10:17.793 "accel_get_stats", 00:10:17.793 "accel_set_options", 00:10:17.793 "accel_set_driver", 00:10:17.793 "accel_crypto_key_destroy", 00:10:17.793 "accel_crypto_keys_get", 00:10:17.793 "accel_crypto_key_create", 00:10:17.793 "accel_assign_opc", 00:10:17.793 "accel_get_module_info", 00:10:17.793 "accel_get_opc_assignments", 00:10:17.793 "vmd_rescan", 00:10:17.793 "vmd_remove_device", 00:10:17.793 "vmd_enable", 00:10:17.793 "sock_get_default_impl", 00:10:17.793 "sock_set_default_impl", 00:10:17.793 "sock_impl_set_options", 00:10:17.793 "sock_impl_get_options", 00:10:17.793 "iobuf_get_stats", 00:10:17.793 "iobuf_set_options", 00:10:17.793 "keyring_get_keys", 00:10:17.793 "framework_get_pci_devices", 00:10:17.793 
"framework_get_config", 00:10:17.793 "framework_get_subsystems", 00:10:17.793 "fsdev_set_opts", 00:10:17.793 "fsdev_get_opts", 00:10:17.793 "trace_get_info", 00:10:17.793 "trace_get_tpoint_group_mask", 00:10:17.793 "trace_disable_tpoint_group", 00:10:17.793 "trace_enable_tpoint_group", 00:10:17.793 "trace_clear_tpoint_mask", 00:10:17.793 "trace_set_tpoint_mask", 00:10:17.793 "notify_get_notifications", 00:10:17.793 "notify_get_types", 00:10:17.793 "spdk_get_version", 00:10:17.793 "rpc_get_methods" 00:10:17.793 ] 00:10:17.793 13:05:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.793 13:05:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:17.793 13:05:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58953 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58953 ']' 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58953 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58953 00:10:17.793 killing process with pid 58953 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58953' 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58953 00:10:17.793 13:05:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58953 00:10:20.322 00:10:20.322 real 0m4.115s 00:10:20.322 user 0m7.457s 00:10:20.322 sys 0m0.689s 00:10:20.322 13:05:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.322 13:05:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.322 ************************************ 00:10:20.322 END TEST spdkcli_tcp 00:10:20.322 ************************************ 00:10:20.322 13:05:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:20.322 13:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.322 13:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.322 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:10:20.322 ************************************ 00:10:20.322 START TEST dpdk_mem_utility 00:10:20.322 ************************************ 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:20.322 * Looking for test storage... 
00:10:20.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.322 13:05:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.322 --rc genhtml_branch_coverage=1 00:10:20.322 --rc genhtml_function_coverage=1 00:10:20.322 --rc genhtml_legend=1 00:10:20.322 --rc geninfo_all_blocks=1 00:10:20.322 --rc geninfo_unexecuted_blocks=1 00:10:20.322 00:10:20.322 ' 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.322 --rc 
genhtml_branch_coverage=1 00:10:20.322 --rc genhtml_function_coverage=1 00:10:20.322 --rc genhtml_legend=1 00:10:20.322 --rc geninfo_all_blocks=1 00:10:20.322 --rc geninfo_unexecuted_blocks=1 00:10:20.322 00:10:20.322 ' 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.322 --rc genhtml_branch_coverage=1 00:10:20.322 --rc genhtml_function_coverage=1 00:10:20.322 --rc genhtml_legend=1 00:10:20.322 --rc geninfo_all_blocks=1 00:10:20.322 --rc geninfo_unexecuted_blocks=1 00:10:20.322 00:10:20.322 ' 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.322 --rc genhtml_branch_coverage=1 00:10:20.322 --rc genhtml_function_coverage=1 00:10:20.322 --rc genhtml_legend=1 00:10:20.322 --rc geninfo_all_blocks=1 00:10:20.322 --rc geninfo_unexecuted_blocks=1 00:10:20.322 00:10:20.322 ' 00:10:20.322 13:05:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:20.322 13:05:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59075 00:10:20.322 13:05:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:20.322 13:05:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59075 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59075 ']' 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.322 13:05:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:20.322 [2024-12-06 13:05:07.320856] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:20.322 [2024-12-06 13:05:07.321027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59075 ] 00:10:20.580 [2024-12-06 13:05:07.510918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.837 [2024-12-06 13:05:07.668906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.785 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.785 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:21.785 13:05:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:21.785 13:05:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:21.785 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.785 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:21.785 { 00:10:21.785 "filename": "/tmp/spdk_mem_dump.txt" 00:10:21.785 } 00:10:21.785 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.785 13:05:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:21.785 DPDK memory size 824.000000 MiB in 1 heap(s) 00:10:21.785 1 heaps totaling size 824.000000 MiB 00:10:21.785 size: 824.000000 MiB heap id: 0 00:10:21.785 end heaps---------- 00:10:21.785 9 mempools totaling size 603.782043 MiB 00:10:21.785 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:21.785 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:21.785 size: 100.555481 MiB name: bdev_io_59075 00:10:21.785 size: 50.003479 MiB name: msgpool_59075 00:10:21.785 size: 36.509338 MiB name: fsdev_io_59075 00:10:21.785 size: 21.763794 MiB name: PDU_Pool 00:10:21.785 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:21.785 size: 4.133484 MiB name: evtpool_59075 00:10:21.785 size: 0.026123 MiB name: Session_Pool 00:10:21.785 end mempools------- 00:10:21.785 6 memzones totaling size 4.142822 MiB 00:10:21.785 size: 1.000366 MiB name: RG_ring_0_59075 00:10:21.785 size: 1.000366 MiB name: RG_ring_1_59075 00:10:21.785 size: 1.000366 MiB name: RG_ring_4_59075 00:10:21.785 size: 1.000366 MiB name: RG_ring_5_59075 00:10:21.785 size: 0.125366 MiB name: RG_ring_2_59075 00:10:21.785 size: 0.015991 MiB name: RG_ring_3_59075 00:10:21.785 end memzones------- 00:10:21.785 13:05:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:21.785 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:10:21.785 list of free elements. 
size: 16.781372 MiB 00:10:21.785 element at address: 0x200006400000 with size: 1.995972 MiB 00:10:21.785 element at address: 0x20000a600000 with size: 1.995972 MiB 00:10:21.785 element at address: 0x200003e00000 with size: 1.991028 MiB 00:10:21.785 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:21.785 element at address: 0x200019900040 with size: 0.999939 MiB 00:10:21.785 element at address: 0x200019a00000 with size: 0.999084 MiB 00:10:21.785 element at address: 0x200032600000 with size: 0.994324 MiB 00:10:21.785 element at address: 0x200000400000 with size: 0.992004 MiB 00:10:21.785 element at address: 0x200019200000 with size: 0.959656 MiB 00:10:21.785 element at address: 0x200019d00040 with size: 0.936401 MiB 00:10:21.785 element at address: 0x200000200000 with size: 0.716980 MiB 00:10:21.785 element at address: 0x20001b400000 with size: 0.562683 MiB 00:10:21.785 element at address: 0x200000c00000 with size: 0.489197 MiB 00:10:21.785 element at address: 0x200019600000 with size: 0.487976 MiB 00:10:21.785 element at address: 0x200019e00000 with size: 0.485413 MiB 00:10:21.785 element at address: 0x200012c00000 with size: 0.433472 MiB 00:10:21.785 element at address: 0x200028800000 with size: 0.390442 MiB 00:10:21.785 element at address: 0x200000800000 with size: 0.350891 MiB 00:10:21.785 list of standard malloc elements. size: 199.287720 MiB 00:10:21.785 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:10:21.785 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:10:21.785 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:21.785 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:21.785 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:10:21.785 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:21.785 element at address: 0x200019deff40 with size: 0.062683 MiB 00:10:21.785 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:21.785 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:10:21.785 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:10:21.785 element at address: 0x200012bff040 with size: 0.000305 MiB 00:10:21.785 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:10:21.785 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:10:21.785 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:10:21.785 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:10:21.786 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200000cff000 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff180 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff280 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff380 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff480 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff580 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff780 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff880 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bff980 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:10:21.786 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200019affc40 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b491dc0 with size: 0.000244 MiB 
00:10:21.786 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:10:21.786 element at 
address: 0x20001b494fc0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200028863f40 with size: 0.000244 MiB 00:10:21.786 element at address: 0x200028864040 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886af80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b080 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b180 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b280 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b380 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b480 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b580 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b780 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b880 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886b980 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886be80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c080 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c180 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c280 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c380 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c480 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c580 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c780 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c880 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886c980 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d080 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d180 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d280 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d380 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d480 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d580 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d780 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d880 
with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886d980 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886da80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886db80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886de80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886df80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e080 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e180 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e280 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e380 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e480 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e580 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e780 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e880 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886e980 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f080 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f180 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f280 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f380 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f480 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f580 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f680 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f780 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f880 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886f980 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:10:21.786 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:10:21.786 list of memzone associated elements. 
size: 607.930908 MiB 00:10:21.786 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:10:21.786 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:21.786 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:10:21.786 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:21.786 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:10:21.786 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59075_0 00:10:21.786 element at address: 0x200000dff340 with size: 48.003113 MiB 00:10:21.786 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59075_0 00:10:21.786 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:10:21.786 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59075_0 00:10:21.786 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:10:21.786 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:21.786 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:10:21.786 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:21.786 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:10:21.786 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59075_0 00:10:21.786 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:10:21.786 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59075 00:10:21.786 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:21.787 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59075 00:10:21.787 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:10:21.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:21.787 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:10:21.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:21.787 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:21.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:21.787 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:10:21.787 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:21.787 element at address: 0x200000cff100 with size: 1.000549 MiB 00:10:21.787 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59075 00:10:21.787 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:10:21.787 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59075 00:10:21.787 element at address: 0x200019affd40 with size: 1.000549 MiB 00:10:21.787 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59075 00:10:21.787 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:10:21.787 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59075 00:10:21.787 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:10:21.787 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59075 00:10:21.787 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:10:21.787 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59075 00:10:21.787 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:10:21.787 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:21.787 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:10:21.787 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:21.787 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:10:21.787 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:10:21.787 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:10:21.787 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59075 00:10:21.787 element at address: 0x20000085df80 with size: 0.125549 MiB 00:10:21.787 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59075 00:10:21.787 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:10:21.787 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:21.787 element at address: 0x200028864140 with size: 0.023804 MiB 00:10:21.787 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:21.787 element at address: 0x200000859d40 with size: 0.016174 MiB 00:10:21.787 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59075 00:10:21.787 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:10:21.787 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:21.787 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:10:21.787 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59075 00:10:21.787 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:10:21.787 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59075 00:10:21.787 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:10:21.787 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59075 00:10:21.787 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:10:21.787 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:21.787 13:05:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:21.787 13:05:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59075 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59075 ']' 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59075 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59075 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.787 killing process with pid 59075 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59075' 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59075 00:10:21.787 13:05:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59075 00:10:24.346 00:10:24.346 real 0m3.831s 00:10:24.346 user 0m3.793s 00:10:24.346 sys 0m0.623s 00:10:24.346 13:05:10 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.346 13:05:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:24.346 ************************************ 00:10:24.346 END TEST dpdk_mem_utility 00:10:24.346 ************************************ 00:10:24.346 13:05:10 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:24.346 13:05:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.346 13:05:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.346 13:05:10 -- common/autotest_common.sh@10 -- # set +x 
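The dpdk_mem_utility pass above reduces to two steps that can be repeated by hand against an already-running spdk_tgt: ask the target to dump its DPDK memory state, then post-process the dump. A minimal sketch using only the commands visible in the trace (the dump lands at the default /tmp/spdk_mem_dump.txt reported by the RPC):

    # request the memory dump over JSON-RPC
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # detail the free/busy element lists of heap 0, as printed above
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0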
00:10:24.346 ************************************ 00:10:24.346 START TEST event 00:10:24.346 ************************************ 00:10:24.346 13:05:10 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:24.346 * Looking for test storage... 00:10:24.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:24.346 13:05:10 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:24.346 13:05:10 event -- common/autotest_common.sh@1711 -- # lcov --version 00:10:24.346 13:05:10 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:24.346 13:05:11 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:24.346 13:05:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.347 13:05:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.347 13:05:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.347 13:05:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.347 13:05:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.347 13:05:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.347 13:05:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.347 13:05:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.347 13:05:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.347 13:05:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.347 13:05:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.347 13:05:11 event -- scripts/common.sh@344 -- # case "$op" in 00:10:24.347 13:05:11 event -- scripts/common.sh@345 -- # : 1 00:10:24.347 13:05:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.347 13:05:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.347 13:05:11 event -- scripts/common.sh@365 -- # decimal 1 00:10:24.347 13:05:11 event -- scripts/common.sh@353 -- # local d=1 00:10:24.347 13:05:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.347 13:05:11 event -- scripts/common.sh@355 -- # echo 1 00:10:24.347 13:05:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.347 13:05:11 event -- scripts/common.sh@366 -- # decimal 2 00:10:24.347 13:05:11 event -- scripts/common.sh@353 -- # local d=2 00:10:24.347 13:05:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.347 13:05:11 event -- scripts/common.sh@355 -- # echo 2 00:10:24.347 13:05:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.347 13:05:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.347 13:05:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.347 13:05:11 event -- scripts/common.sh@368 -- # return 0 00:10:24.347 13:05:11 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.347 13:05:11 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:24.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.347 --rc genhtml_branch_coverage=1 00:10:24.347 --rc genhtml_function_coverage=1 00:10:24.347 --rc genhtml_legend=1 00:10:24.347 --rc geninfo_all_blocks=1 00:10:24.347 --rc geninfo_unexecuted_blocks=1 00:10:24.347 00:10:24.347 ' 00:10:24.347 13:05:11 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:24.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.347 --rc genhtml_branch_coverage=1 00:10:24.347 --rc genhtml_function_coverage=1 00:10:24.347 --rc genhtml_legend=1 00:10:24.347 --rc 
geninfo_all_blocks=1 00:10:24.347 --rc geninfo_unexecuted_blocks=1 00:10:24.347 00:10:24.347 ' 00:10:24.347 13:05:11 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:24.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.347 --rc genhtml_branch_coverage=1 00:10:24.347 --rc genhtml_function_coverage=1 00:10:24.347 --rc genhtml_legend=1 00:10:24.347 --rc geninfo_all_blocks=1 00:10:24.347 --rc geninfo_unexecuted_blocks=1 00:10:24.347 00:10:24.347 ' 00:10:24.347 13:05:11 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:24.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.347 --rc genhtml_branch_coverage=1 00:10:24.347 --rc genhtml_function_coverage=1 00:10:24.347 --rc genhtml_legend=1 00:10:24.347 --rc geninfo_all_blocks=1 00:10:24.347 --rc geninfo_unexecuted_blocks=1 00:10:24.347 00:10:24.347 ' 00:10:24.347 13:05:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:24.347 13:05:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:24.347 13:05:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:24.347 13:05:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:24.347 13:05:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.347 13:05:11 event -- common/autotest_common.sh@10 -- # set +x 00:10:24.347 ************************************ 00:10:24.347 START TEST event_perf 00:10:24.347 ************************************ 00:10:24.347 13:05:11 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:24.347 Running I/O for 1 seconds...[2024-12-06 13:05:11.154643] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:24.347 [2024-12-06 13:05:11.154826] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:10:24.347 [2024-12-06 13:05:11.338924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.605 [2024-12-06 13:05:11.455593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.605 [2024-12-06 13:05:11.455768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.605 Running I/O for 1 seconds...[2024-12-06 13:05:11.455913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.605 [2024-12-06 13:05:11.455925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.979 00:10:25.979 lcore 0: 200147 00:10:25.979 lcore 1: 200148 00:10:25.979 lcore 2: 200149 00:10:25.979 lcore 3: 200149 00:10:25.979 done. 
00:10:25.979 00:10:25.979 real 0m1.575s 00:10:25.979 user 0m4.337s 00:10:25.979 sys 0m0.118s 00:10:25.979 ************************************ 00:10:25.979 END TEST event_perf 00:10:25.979 ************************************ 00:10:25.979 13:05:12 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.979 13:05:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:25.979 13:05:12 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:25.979 13:05:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:25.979 13:05:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.979 13:05:12 event -- common/autotest_common.sh@10 -- # set +x 00:10:25.979 ************************************ 00:10:25.979 START TEST event_reactor 00:10:25.979 ************************************ 00:10:25.979 13:05:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:25.979 [2024-12-06 13:05:12.767362] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:25.979 [2024-12-06 13:05:12.767522] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ] 00:10:25.979 [2024-12-06 13:05:12.940254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.237 [2024-12-06 13:05:13.070663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.640 test_start 00:10:27.640 oneshot 00:10:27.640 tick 100 00:10:27.640 tick 100 00:10:27.640 tick 250 00:10:27.640 tick 100 00:10:27.640 tick 100 00:10:27.640 tick 100 00:10:27.640 tick 250 00:10:27.640 tick 500 00:10:27.640 tick 100 00:10:27.640 tick 100 00:10:27.640 tick 250 00:10:27.640 tick 100 00:10:27.640 tick 100 00:10:27.640 test_end 00:10:27.640 00:10:27.640 real 0m1.570s 00:10:27.640 user 0m1.377s 00:10:27.640 sys 0m0.083s 00:10:27.640 13:05:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.640 13:05:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:27.640 ************************************ 00:10:27.640 END TEST event_reactor 00:10:27.640 ************************************ 00:10:27.640 13:05:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:27.640 13:05:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.640 13:05:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.640 13:05:14 event -- common/autotest_common.sh@10 -- # set +x 00:10:27.640 ************************************ 00:10:27.640 START TEST event_reactor_perf 00:10:27.640 ************************************ 00:10:27.640 13:05:14 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:27.641 [2024-12-06 13:05:14.403680] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
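For reference, the three event binaries exercised in this block are standalone SPDK apps taking the same core-mask and run-time flags seen in the trace; a sketch of the bare invocations, assuming hugepages are configured as on the CI VM:

    # event_perf: dispatch events across 4 reactors (-m 0xF) for 1 second
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # reactor: single-core oneshot/tick poller trace, as printed above
    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
    # reactor_perf: raw events-per-second figure on one core
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1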
00:10:27.641 [2024-12-06 13:05:14.403854] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ] 00:10:27.641 [2024-12-06 13:05:14.588380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.898 [2024-12-06 13:05:14.714770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.272 test_start 00:10:29.272 test_end 00:10:29.272 Performance: 293920 events per second 00:10:29.272 00:10:29.272 real 0m1.583s 00:10:29.272 user 0m1.362s 00:10:29.272 sys 0m0.110s 00:10:29.272 13:05:15 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.272 13:05:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:29.272 ************************************ 00:10:29.272 END TEST event_reactor_perf 00:10:29.272 ************************************ 00:10:29.272 13:05:15 event -- event/event.sh@49 -- # uname -s 00:10:29.272 13:05:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:29.272 13:05:15 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:29.272 13:05:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:29.272 13:05:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.272 13:05:15 event -- common/autotest_common.sh@10 -- # set +x 00:10:29.272 ************************************ 00:10:29.272 START TEST event_scheduler 00:10:29.272 ************************************ 00:10:29.272 13:05:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:29.272 * Looking for test storage... 
00:10:29.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:29.272 13:05:16 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.272 13:05:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.272 13:05:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.272 13:05:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.272 13:05:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:29.273 13:05:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.273 --rc genhtml_branch_coverage=1 00:10:29.273 --rc genhtml_function_coverage=1 00:10:29.273 --rc genhtml_legend=1 00:10:29.273 --rc geninfo_all_blocks=1 00:10:29.273 --rc geninfo_unexecuted_blocks=1 00:10:29.273 00:10:29.273 ' 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.273 --rc genhtml_branch_coverage=1 00:10:29.273 --rc genhtml_function_coverage=1 00:10:29.273 --rc genhtml_legend=1 00:10:29.273 --rc geninfo_all_blocks=1 00:10:29.273 --rc geninfo_unexecuted_blocks=1 00:10:29.273 00:10:29.273 ' 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.273 --rc genhtml_branch_coverage=1 00:10:29.273 --rc genhtml_function_coverage=1 00:10:29.273 --rc genhtml_legend=1 00:10:29.273 --rc geninfo_all_blocks=1 00:10:29.273 --rc geninfo_unexecuted_blocks=1 00:10:29.273 00:10:29.273 ' 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.273 --rc genhtml_branch_coverage=1 00:10:29.273 --rc genhtml_function_coverage=1 00:10:29.273 --rc genhtml_legend=1 00:10:29.273 --rc geninfo_all_blocks=1 00:10:29.273 --rc geninfo_unexecuted_blocks=1 00:10:29.273 00:10:29.273 ' 00:10:29.273 13:05:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:29.273 13:05:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59330 00:10:29.273 13:05:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:29.273 13:05:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59330 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59330 ']' 00:10:29.273 13:05:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.273 13:05:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:29.531 [2024-12-06 13:05:16.298814] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:29.531 [2024-12-06 13:05:16.299563] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59330 ] 00:10:29.531 [2024-12-06 13:05:16.494554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.789 [2024-12-06 13:05:16.653711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.789 [2024-12-06 13:05:16.653819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.789 [2024-12-06 13:05:16.653956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.789 [2024-12-06 13:05:16.653966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.353 13:05:17 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.353 13:05:17 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:30.353 13:05:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:30.353 13:05:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.353 13:05:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:30.353 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:30.353 POWER: Cannot set governor of lcore 0 to userspace 00:10:30.353 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:30.353 POWER: Cannot set governor of lcore 0 to performance 00:10:30.353 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:30.353 POWER: Cannot set governor of lcore 0 to userspace 00:10:30.353 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:30.353 POWER: Cannot set governor of lcore 0 to userspace 00:10:30.353 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:30.353 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:30.353 POWER: Unable to set Power Management Environment for lcore 0 00:10:30.353 [2024-12-06 13:05:17.292099] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:30.353 [2024-12-06 13:05:17.292146] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:30.353 [2024-12-06 13:05:17.292164] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:30.353 [2024-12-06 13:05:17.292210] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:30.353 [2024-12-06 13:05:17.292230] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:30.353 [2024-12-06 13:05:17.292244] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:30.353 13:05:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.353 13:05:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:30.353 13:05:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.353 13:05:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 [2024-12-06 13:05:17.627156] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
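The POWER and GUEST_CHANNEL errors above are the dynamic scheduler's DPDK governor failing to reach the cpufreq sysfs files inside the VM, so it initializes without frequency scaling and keeps its default knobs (load limit 20, core limit 80, core busy 95). The scheduler swap itself is plain JSON-RPC and must happen before framework initialization completes, which is why the app was started with --wait-for-rpc; a minimal sketch of the sequence the test drives:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # select the dynamic scheduler while the framework is still paused
    $rpc framework_set_scheduler dynamic
    # then let initialization proceed
    $rpc framework_start_init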
00:10:30.920 13:05:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.920 13:05:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:30.920 13:05:17 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.920 13:05:17 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.920 13:05:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 ************************************ 00:10:30.920 START TEST scheduler_create_thread 00:10:30.920 ************************************ 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 2 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 3 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 4 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 5 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 6 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 7 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 8 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 9 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 10 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:05:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:32.306 13:05:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.306 13:05:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:32.306 13:05:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:32.306 13:05:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.306 13:05:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.680 ************************************ 00:10:33.680 END TEST scheduler_create_thread 00:10:33.680 ************************************ 00:10:33.680 13:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.680 00:10:33.680 real 0m2.618s 00:10:33.680 user 0m0.017s 00:10:33.680 sys 0m0.007s 00:10:33.680 13:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.680 13:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.680 13:05:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:33.680 13:05:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59330 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59330 ']' 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59330 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59330 00:10:33.680 killing process with pid 59330 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59330' 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59330 00:10:33.680 13:05:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59330 00:10:33.939 [2024-12-06 13:05:20.738168] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
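The scheduler_create_thread subtest drives everything through an rpc.py plugin (scheduler_plugin, shipped with the test app) rather than built-in RPCs. Replaying the calls from the trace looks roughly like this, assuming the plugin is importable on PYTHONPATH as the harness arranges, and noting that scheduler_thread_create prints the new thread id (the trace captures 11 and 12 this way):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin'
    # a thread pinned to core 0 requesting 100% activity
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # an unpinned thread created idle; capture its id from stdout
    id=$($rpc scheduler_thread_create -n half_active -a 0)
    # raise it to 50% active, then remove it
    $rpc scheduler_thread_set_active "$id" 50
    $rpc scheduler_thread_delete "$id"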
00:10:34.876
00:10:34.876 real 0m5.857s
00:10:34.876 user 0m10.231s
00:10:34.876 sys 0m0.513s
13:05:21 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.876 13:05:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:34.876 ************************************
00:10:34.876 END TEST event_scheduler
00:10:34.876 ************************************
00:10:35.134 13:05:21 event -- event/event.sh@51 -- # modprobe -n nbd
00:10:35.135 13:05:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:10:35.135 13:05:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:35.135 13:05:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:35.135 13:05:21 event -- common/autotest_common.sh@10 -- # set +x
00:10:35.135 ************************************
00:10:35.135 START TEST app_repeat
00:10:35.135 ************************************
00:10:35.135 13:05:21 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59441
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59441'
00:10:35.135 Process app_repeat pid: 59441
13:05:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:35.135 spdk_app_start Round 0
13:05:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:10:35.135 13:05:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock
00:10:35.135 13:05:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']'
00:10:35.135 13:05:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:35.135 13:05:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:35.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:05:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:35.135 13:05:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:35.135 13:05:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:35.135 [2024-12-06 13:05:21.976639] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
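The app_repeat setup just traced follows the standard autotest pattern for supervising a background SPDK app: launch it, record its pid, arm a cleanup trap, then block until the RPC socket accepts connections. Roughly (the backgrounding itself is implied rather than visible in the xtrace output):

    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!                                          # 59441 in this run
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock     # polls the socket, up to max_retries=100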
00:10:35.135 [2024-12-06 13:05:21.976771] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59441 ] 00:10:35.393 [2024-12-06 13:05:22.157823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:35.393 [2024-12-06 13:05:22.301200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.393 [2024-12-06 13:05:22.301305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.345 13:05:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.345 13:05:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:36.345 13:05:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:36.603 Malloc0 00:10:36.603 13:05:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:36.861 Malloc1 00:10:36.861 13:05:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:36.861 13:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:37.120 /dev/nbd0 00:10:37.379 13:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:37.379 13:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:37.379 13:05:24 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:37.379 1+0 records in 00:10:37.379 1+0 records out 00:10:37.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307221 s, 13.3 MB/s 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.379 13:05:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:37.379 13:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.379 13:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:37.379 13:05:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:37.638 /dev/nbd1 00:10:37.638 13:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:37.638 13:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:37.638 1+0 records in 00:10:37.638 1+0 records out 00:10:37.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341708 s, 12.0 MB/s 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.638 13:05:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:37.638 13:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.638 13:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:37.638 13:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.638 13:05:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
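The waitfornbd helper traced here gates on two conditions before trusting an NBD export: the device must appear in /proc/partitions, and a direct-I/O read of its first 4 KiB block must return data. A sketch reconstructed from the @875-@893 trace markers; the temp-file path and retry delay are assumptions, since neither is visible in the xtrace output:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                        # first gate: kernel lists the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                          # assumed delay between probes
        done
        for ((i = 1; i <= 20; i++)); do                        # second gate: first block is readable
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0                   # the '[' 4096 '!=' 0 ']' check: device is live
            fi
            sleep 0.1                                          # assumed
        done
        return 1
    }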
00:10:37.638 13:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:37.896 { 00:10:37.896 "nbd_device": "/dev/nbd0", 00:10:37.896 "bdev_name": "Malloc0" 00:10:37.896 }, 00:10:37.896 { 00:10:37.896 "nbd_device": "/dev/nbd1", 00:10:37.896 "bdev_name": "Malloc1" 00:10:37.896 } 00:10:37.896 ]' 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:37.896 { 00:10:37.896 "nbd_device": "/dev/nbd0", 00:10:37.896 "bdev_name": "Malloc0" 00:10:37.896 }, 00:10:37.896 { 00:10:37.896 "nbd_device": "/dev/nbd1", 00:10:37.896 "bdev_name": "Malloc1" 00:10:37.896 } 00:10:37.896 ]' 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:37.896 /dev/nbd1' 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:37.896 /dev/nbd1' 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:37.896 256+0 records in 00:10:37.896 256+0 records out 00:10:37.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00776139 s, 135 MB/s 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:37.896 256+0 records in 00:10:37.896 256+0 records out 00:10:37.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305757 s, 34.3 MB/s 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.896 13:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:37.896 256+0 records in 00:10:37.896 256+0 records out 00:10:37.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349909 s, 30.0 MB/s 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:38.154 13:05:24 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.154 13:05:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.411 13:05:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:38.669 13:05:25 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.669 13:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:38.927 13:05:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:38.927 13:05:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:39.494 13:05:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:40.433 [2024-12-06 13:05:27.416852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:40.722 [2024-12-06 13:05:27.539290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.722 [2024-12-06 13:05:27.539304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.722 [2024-12-06 13:05:27.718637] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:40.722 [2024-12-06 13:05:27.718764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:42.623 spdk_app_start Round 1 00:10:42.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:42.623 13:05:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:42.623 13:05:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:42.623 13:05:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock 00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']' 00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
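The round that just finished shows the core of nbd_rpc_data_verify: stage 1 MiB of random data, push it through each NBD export with direct I/O, and read it back with cmp so any corruption in the Malloc bdev path fails the test. Condensed from the @70-@85 trace markers (the temp file lives under the repo's test/event directory in the real run):

    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256          # 1 MiB random payload
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$dev bs=4096 count=256 oflag=direct # write pass
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $dev                            # verify pass; a byte mismatch fails the round
    done
    rm $tmp_file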
00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.623 13:05:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:42.623 13:05:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:43.190 Malloc0 00:10:43.190 13:05:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:43.449 Malloc1 00:10:43.449 13:05:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.449 13:05:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:43.708 /dev/nbd0 00:10:43.708 13:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:43.708 13:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:43.708 1+0 records in 00:10:43.708 1+0 records out 
00:10:43.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310173 s, 13.2 MB/s 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:43.708 13:05:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:43.708 13:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.708 13:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.708 13:05:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:43.966 /dev/nbd1 00:10:43.966 13:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:43.966 13:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:43.966 1+0 records in 00:10:43.966 1+0 records out 00:10:43.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371392 s, 11.0 MB/s 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:43.966 13:05:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:43.966 13:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.966 13:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.967 13:05:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:43.967 13:05:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.967 13:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:44.539 { 00:10:44.539 "nbd_device": "/dev/nbd0", 00:10:44.539 "bdev_name": "Malloc0" 00:10:44.539 }, 00:10:44.539 { 00:10:44.539 "nbd_device": "/dev/nbd1", 00:10:44.539 "bdev_name": "Malloc1" 00:10:44.539 } 
00:10:44.539 ]' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:44.539 { 00:10:44.539 "nbd_device": "/dev/nbd0", 00:10:44.539 "bdev_name": "Malloc0" 00:10:44.539 }, 00:10:44.539 { 00:10:44.539 "nbd_device": "/dev/nbd1", 00:10:44.539 "bdev_name": "Malloc1" 00:10:44.539 } 00:10:44.539 ]' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:44.539 /dev/nbd1' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:44.539 /dev/nbd1' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:44.539 256+0 records in 00:10:44.539 256+0 records out 00:10:44.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00704137 s, 149 MB/s 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:44.539 256+0 records in 00:10:44.539 256+0 records out 00:10:44.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299911 s, 35.0 MB/s 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:44.539 256+0 records in 00:10:44.539 256+0 records out 00:10:44.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304472 s, 34.4 MB/s 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:44.539 13:05:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.539 13:05:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.796 13:05:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.053 13:05:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:45.707 13:05:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:45.707 13:05:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:45.965 13:05:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:46.896 [2024-12-06 13:05:33.867101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:47.154 [2024-12-06 13:05:34.001254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.154 [2024-12-06 13:05:34.001257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.412 [2024-12-06 13:05:34.202117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:47.412 [2024-12-06 13:05:34.202287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:49.309 spdk_app_start Round 2 00:10:49.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:49.309 13:05:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:49.309 13:05:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:49.309 13:05:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock 00:10:49.309 13:05:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']' 00:10:49.309 13:05:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:49.309 13:05:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.309 13:05:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
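Round 1 above repeats Round 0 verbatim, which is the whole point of app_repeat_test: the same process must survive repeated spdk_app_start/stop cycles without leaking state. The driving loop, reconstructed from the event.sh@23-@35 markers (rpc.py stands for the full scripts/rpc.py path shown in the trace):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock              # app is listening again
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc0
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM     # app restarts its event loop
        sleep 3
    done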
00:10:49.309 13:05:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.309 13:05:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:49.309 13:05:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.309 13:05:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:49.309 13:05:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:49.567 Malloc0 00:10:49.567 13:05:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:50.132 Malloc1 00:10:50.132 13:05:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.132 13:05:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:50.392 /dev/nbd0 00:10:50.392 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:50.392 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:50.392 1+0 records in 00:10:50.392 1+0 records out 
00:10:50.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376102 s, 10.9 MB/s 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:50.392 13:05:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:50.392 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:50.392 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.392 13:05:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:50.650 /dev/nbd1 00:10:50.650 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:50.650 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:50.650 1+0 records in 00:10:50.650 1+0 records out 00:10:50.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401972 s, 10.2 MB/s 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:50.650 13:05:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:50.650 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:50.650 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.650 13:05:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:50.650 13:05:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.650 13:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:51.236 { 00:10:51.236 "nbd_device": "/dev/nbd0", 00:10:51.236 "bdev_name": "Malloc0" 00:10:51.236 }, 00:10:51.236 { 00:10:51.236 "nbd_device": "/dev/nbd1", 00:10:51.236 "bdev_name": "Malloc1" 00:10:51.236 } 
00:10:51.236 ]' 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:51.236 { 00:10:51.236 "nbd_device": "/dev/nbd0", 00:10:51.236 "bdev_name": "Malloc0" 00:10:51.236 }, 00:10:51.236 { 00:10:51.236 "nbd_device": "/dev/nbd1", 00:10:51.236 "bdev_name": "Malloc1" 00:10:51.236 } 00:10:51.236 ]' 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:51.236 /dev/nbd1' 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:51.236 /dev/nbd1' 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:51.236 13:05:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:51.236 256+0 records in 00:10:51.236 256+0 records out 00:10:51.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00687321 s, 153 MB/s 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:51.236 256+0 records in 00:10:51.236 256+0 records out 00:10:51.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027295 s, 38.4 MB/s 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:51.236 256+0 records in 00:10:51.236 256+0 records out 00:10:51.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315793 s, 33.2 MB/s 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.236 13:05:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:51.493 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:51.493 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:51.493 13:05:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:51.493 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.493 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.493 13:05:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:51.494 13:05:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:51.494 13:05:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.494 13:05:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.494 13:05:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.751 13:05:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:52.316 13:05:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:52.316 13:05:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:52.883 13:05:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:53.817 [2024-12-06 13:05:40.731336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.078 [2024-12-06 13:05:40.865363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.078 [2024-12-06 13:05:40.865376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.078 [2024-12-06 13:05:41.060958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:54.078 [2024-12-06 13:05:41.061049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:55.977 13:05:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock 00:10:55.977 13:05:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']' 00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:55.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
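With both exports stopped, the harness proves they are really gone: nbd_get_disks now returns '[]', so the jq/grep pipeline above counts zero /dev/nbd names. The counting idiom at the heart of nbd_get_count, as the @63-@109 markers show it:

    nbd_disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)    # '[]' once all disks are stopped
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)          # grep exits non-zero on 0 matches
    [ "$count" -ne 0 ] && return 1                                      # any surviving export is a failure
    return 0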
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:55.978 13:05:42 event.app_repeat -- event/event.sh@39 -- # killprocess 59441
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59441 ']'
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59441
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59441
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:55.978 killing process with pid 59441
13:05:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59441'
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59441
00:10:55.978 13:05:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59441
00:10:56.913 spdk_app_start is called in Round 0.
00:10:56.913 Shutdown signal received, stop current app iteration
00:10:56.913 Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 reinitialization...
00:10:56.913 spdk_app_start is called in Round 1.
00:10:56.913 Shutdown signal received, stop current app iteration
00:10:56.913 Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 reinitialization...
00:10:56.913 spdk_app_start is called in Round 2.
00:10:56.913 Shutdown signal received, stop current app iteration
00:10:56.913 Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 reinitialization...
00:10:56.913 spdk_app_start is called in Round 3.
00:10:56.913 Shutdown signal received, stop current app iteration
00:10:57.170 13:05:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:10:57.170 13:05:43 event.app_repeat -- event/event.sh@42 -- # return 0
00:10:57.170
00:10:57.170 real 0m22.028s
00:10:57.170 user 0m48.900s
00:10:57.170 sys 0m3.163s
00:10:57.171 13:05:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:57.171 13:05:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:57.171 ************************************
00:10:57.171 END TEST app_repeat
00:10:57.171 ************************************
00:10:57.171 13:05:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:10:57.171 13:05:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:57.171 13:05:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:57.171 13:05:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:57.171 13:05:43 event -- common/autotest_common.sh@10 -- # set +x
00:10:57.171 ************************************
00:10:57.171 START TEST cpu_locks
00:10:57.171 ************************************
00:10:57.171 13:05:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:57.171 * Looking for test storage...
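Before cpu_locks begins, note how deliberately killprocess (traced here for pid 59441, and earlier for 59330) tears the app down: it confirms the pid is alive with kill -0, reads the process name with ps so a recycled pid is never signalled, then kills and waits to reap the child. A condensed sketch of the autotest_common.sh logic visible in the trace; the sudo branch is simplified:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                             # no pid, nothing to do
        kill -0 "$pid"                                        # liveness probe; fails if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        fi
        [ "$process_name" = sudo ] && return 1                # simplified; the real helper re-targets sudo's child
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap and propagate the exit status
    }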
00:10:57.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.171 13:05:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.171 --rc genhtml_branch_coverage=1 00:10:57.171 --rc genhtml_function_coverage=1 00:10:57.171 --rc genhtml_legend=1 00:10:57.171 --rc geninfo_all_blocks=1 00:10:57.171 --rc geninfo_unexecuted_blocks=1 00:10:57.171 00:10:57.171 ' 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.171 --rc genhtml_branch_coverage=1 00:10:57.171 --rc genhtml_function_coverage=1 
00:10:57.171 --rc genhtml_legend=1 00:10:57.171 --rc geninfo_all_blocks=1 00:10:57.171 --rc geninfo_unexecuted_blocks=1 00:10:57.171 00:10:57.171 ' 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.171 --rc genhtml_branch_coverage=1 00:10:57.171 --rc genhtml_function_coverage=1 00:10:57.171 --rc genhtml_legend=1 00:10:57.171 --rc geninfo_all_blocks=1 00:10:57.171 --rc geninfo_unexecuted_blocks=1 00:10:57.171 00:10:57.171 ' 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.171 --rc genhtml_branch_coverage=1 00:10:57.171 --rc genhtml_function_coverage=1 00:10:57.171 --rc genhtml_legend=1 00:10:57.171 --rc geninfo_all_blocks=1 00:10:57.171 --rc geninfo_unexecuted_blocks=1 00:10:57.171 00:10:57.171 ' 00:10:57.171 13:05:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:57.171 13:05:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:57.171 13:05:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:57.171 13:05:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.171 13:05:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:57.428 ************************************ 00:10:57.428 START TEST default_locks 00:10:57.428 ************************************ 00:10:57.428 13:05:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59922 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59922 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59922 ']' 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.429 13:05:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:57.429 [2024-12-06 13:05:44.321786] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
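The cpu_locks preamble above gates lcov coverage flags on the tool's version: scripts/common.sh splits each version string on '.', '-' and ':' (IFS=.-:) and compares the fields numerically, and since lcov 1.15 is less than 2 the older --rc lcov_branch_coverage/lcov_function_coverage options get exported. A condensed sketch of that field-wise comparison (the suite's real helper is cmp_versions, reached here through 'lt 1.15 2'):

  # return 0 when $1 < $2, comparing dot/dash/colon separated fields
  lt() {
      local IFS=.-:
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1    # versions are equal
  }
  lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"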
00:10:57.429 [2024-12-06 13:05:44.321972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:10:57.685 [2024-12-06 13:05:44.507679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.685 [2024-12-06 13:05:44.638177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.613 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.613 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:58.613 13:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59922 00:10:58.613 13:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59922 00:10:58.613 13:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59922 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59922 ']' 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59922 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59922 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.176 killing process with pid 59922 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59922' 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59922 00:10:59.176 13:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59922 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59922 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59922 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59922 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59922 ']' 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.704 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.704 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.705 ERROR: process (pid: 59922) is no longer running 00:11:01.705 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59922) - No such process 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:01.705 ************************************ 00:11:01.705 END TEST default_locks 00:11:01.705 ************************************ 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:01.705 00:11:01.705 real 0m4.027s 00:11:01.705 user 0m3.989s 00:11:01.705 sys 0m0.772s 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.705 13:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.705 13:05:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:01.705 13:05:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.705 13:05:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.705 13:05:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.705 ************************************ 00:11:01.705 START TEST default_locks_via_rpc 00:11:01.705 ************************************ 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59997 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59997 00:11:01.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
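The default_locks case that finished above is the baseline behavior: spdk_tgt launched on core mask 0x1 (pid 59922) takes its CPU core lock by default, the test proves the lock exists, kills the process, and then expects a second waitforlisten on the dead pid to fail ('No such process', return 1). The lock probe is just a filter over lslocks output, per the trace:

  # check that the target process holds an spdk_cpu_lock_* file lock
  pid=59922                                    # pid from the run above
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"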
00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59997 ']' 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.705 13:05:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.705 [2024-12-06 13:05:48.408256] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:01.705 [2024-12-06 13:05:48.409381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59997 ] 00:11:01.705 [2024-12-06 13:05:48.605378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.963 [2024-12-06 13:05:48.742471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59997 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59997 00:11:02.897 13:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59997 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59997 ']' 00:11:03.156 13:05:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59997 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59997 00:11:03.156 killing process with pid 59997 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59997' 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59997 00:11:03.156 13:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59997 00:11:05.684 ************************************ 00:11:05.684 END TEST default_locks_via_rpc 00:11:05.684 ************************************ 00:11:05.684 00:11:05.684 real 0m4.177s 00:11:05.684 user 0m4.247s 00:11:05.684 sys 0m0.759s 00:11:05.684 13:05:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.684 13:05:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.684 13:05:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:05.684 13:05:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.684 13:05:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.684 13:05:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:05.684 ************************************ 00:11:05.684 START TEST non_locking_app_on_locked_coremask 00:11:05.684 ************************************ 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:05.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60071 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60071 /var/tmp/spdk.sock 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60071 ']' 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
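default_locks_via_rpc, which ended above, toggles the same locks on a live target over RPC: framework_disable_cpumask_locks releases the lock files (no_locks then finds zero /var/tmp/spdk_cpu_lock_* entries), framework_enable_cpumask_locks re-acquires them, and lslocks matches spdk_cpu_lock again for pid 59997. Roughly, with the suite's rpc_cmd expanded to the script it wraps:

  # toggle CPU core lock files on a running target via its RPC socket
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks       # locks released
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null      # expected: no matches
  "$rpc" framework_enable_cpumask_locks        # locks re-acquired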
00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.684 13:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:05.684 [2024-12-06 13:05:52.637121] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:05.684 [2024-12-06 13:05:52.637587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60071 ] 00:11:05.943 [2024-12-06 13:05:52.825621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.202 [2024-12-06 13:05:52.959009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60087 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60087 /var/tmp/spdk2.sock 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60087 ']' 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.136 13:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:07.136 [2024-12-06 13:05:53.960384] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:07.136 [2024-12-06 13:05:53.960813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60087 ] 00:11:07.394 [2024-12-06 13:05:54.171192] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:07.394 [2024-12-06 13:05:54.171312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.652 [2024-12-06 13:05:54.441807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.187 13:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.187 13:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:10.187 13:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60071 00:11:10.187 13:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60071 00:11:10.187 13:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:10.767 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60071 00:11:10.767 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60071 ']' 00:11:10.767 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60071 00:11:10.767 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:10.767 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.767 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60071 00:11:10.767 killing process with pid 60071 00:11:10.767 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.768 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.768 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60071' 00:11:10.768 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60071 00:11:10.768 13:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60071 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60087 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60087 ']' 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60087 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60087 00:11:16.025 killing process with pid 60087 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60087' 00:11:16.025 13:06:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60087 00:11:16.025 13:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60087 00:11:17.401 00:11:17.401 real 0m11.800s 00:11:17.401 user 0m12.325s 00:11:17.401 sys 0m1.544s 00:11:17.401 13:06:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.401 ************************************ 00:11:17.401 END TEST non_locking_app_on_locked_coremask 00:11:17.401 ************************************ 00:11:17.401 13:06:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:17.401 13:06:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:17.401 13:06:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.401 13:06:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.401 13:06:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:17.401 ************************************ 00:11:17.401 START TEST locking_app_on_unlocked_coremask 00:11:17.401 ************************************ 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:17.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60242 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60242 /var/tmp/spdk.sock 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60242 ']' 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.401 13:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:17.658 [2024-12-06 13:06:04.489017] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:17.658 [2024-12-06 13:06:04.489428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60242 ] 00:11:17.658 [2024-12-06 13:06:04.672302] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
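non_locking_app_on_locked_coremask, closed just above, shows that --disable-cpumask-locks opts a process out of the contention: the first target (pid 60071, -m 0x1) holds the core 0 lock, yet the second starts cleanly on the same mask because it skips locking ('CPU core locks deactivated'), and only the first pid matches spdk_cpu_lock in lslocks. The launch pair from the trace, with backgrounding left implicit in the suite's helpers:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$bin" -m 0x1 &                                                 # takes core 0 lock
  "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # takes none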
00:11:17.916 [2024-12-06 13:06:04.672592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.916 [2024-12-06 13:06:04.801827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60263 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60263 /var/tmp/spdk2.sock 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60263 ']' 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:18.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.849 13:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:18.849 [2024-12-06 13:06:05.761695] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:18.849 [2024-12-06 13:06:05.762575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60263 ] 00:11:19.106 [2024-12-06 13:06:05.962036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.366 [2024-12-06 13:06:06.225363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.896 13:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.896 13:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:21.896 13:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60263 00:11:21.896 13:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:21.896 13:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60263 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60242 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60242 ']' 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60242 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60242 00:11:22.461 killing process with pid 60242 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60242' 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60242 00:11:22.461 13:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60242 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60263 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60263 ']' 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60263 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60263 00:11:27.725 killing process with pid 60263 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.725 13:06:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60263' 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60263 00:11:27.725 13:06:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60263 00:11:29.116 00:11:29.116 real 0m11.718s 00:11:29.116 user 0m12.295s 00:11:29.116 sys 0m1.474s 00:11:29.116 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.116 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:29.116 ************************************ 00:11:29.116 END TEST locking_app_on_unlocked_coremask 00:11:29.116 ************************************ 00:11:29.116 13:06:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:29.116 13:06:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.116 13:06:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.116 13:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.116 ************************************ 00:11:29.116 START TEST locking_app_on_locked_coremask 00:11:29.116 ************************************ 00:11:29.116 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:29.116 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60411 00:11:29.116 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:29.117 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60411 /var/tmp/spdk.sock 00:11:29.117 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60411 ']' 00:11:29.117 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.117 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.374 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.374 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.374 13:06:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:29.374 [2024-12-06 13:06:16.258295] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
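locking_app_on_unlocked_coremask, ended above, inverts the previous case: the first target (pid 60242) starts with --disable-cpumask-locks, so the second one (pid 60263, same 0x1 mask, locking enabled) is the process that acquires the core 0 lock, and the suite's lslocks probe targets 60263. The negative probe of the unlocked pid below is shown only for contrast and is not part of the test:

  lslocks -p 60242 | grep -q spdk_cpu_lock || echo "60242 holds no core lock"   # illustrative
  lslocks -p 60263 | grep -q spdk_cpu_lock && echo "60263 holds the core lock"  # what the suite checks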
00:11:29.374 [2024-12-06 13:06:16.258771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60411 ] 00:11:29.632 [2024-12-06 13:06:16.441815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.632 [2024-12-06 13:06:16.571995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60433 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60433 /var/tmp/spdk2.sock 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60433 /var/tmp/spdk2.sock 00:11:30.566 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60433 /var/tmp/spdk2.sock 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60433 ']' 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:30.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.567 13:06:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:30.567 [2024-12-06 13:06:17.575850] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:30.567 [2024-12-06 13:06:17.576306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:11:30.825 [2024-12-06 13:06:17.782007] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60411 has claimed it. 00:11:30.825 [2024-12-06 13:06:17.782100] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:31.391 ERROR: process (pid: 60433) is no longer running 00:11:31.391 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60433) - No such process 00:11:31.391 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.391 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:31.391 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:31.391 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.392 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:31.392 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.392 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60411 00:11:31.392 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60411 00:11:31.392 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60411 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60411 ']' 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60411 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60411 00:11:31.958 killing process with pid 60411 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60411' 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60411 00:11:31.958 13:06:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60411 00:11:34.504 00:11:34.504 real 0m4.833s 00:11:34.504 user 0m5.162s 00:11:34.504 sys 0m0.914s 00:11:34.504 ************************************ 00:11:34.504 END TEST locking_app_on_locked_coremask 00:11:34.504 ************************************ 00:11:34.504 13:06:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.504 13:06:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.504 13:06:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:34.504 13:06:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.504 13:06:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.504 13:06:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.504 ************************************ 00:11:34.504 START TEST locking_overlapped_coremask 00:11:34.504 ************************************ 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60497 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60497 /var/tmp/spdk.sock 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60497 ']' 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.504 13:06:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.504 [2024-12-06 13:06:21.143676] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
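Before locking_overlapped_coremask begins above, locking_app_on_locked_coremask delivered the single-core conflict: with pid 60411 holding core 0, a second fully-locking target aborts at startup ('Cannot create lock on core 0, probably process 60411 has claimed it.' followed by 'Unable to acquire lock on assigned core mask - exiting.'), and the NOT wrapper turns that expected failure into a pass. A simplified sketch of that wrapper (the suite's real helper also normalizes the exit status through es):

  # NOT succeeds exactly when its command fails
  NOT() { ! "$@"; }
  NOT false && echo "failure observed, as the test requires"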
00:11:34.504 [2024-12-06 13:06:21.143864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60497 ] 00:11:34.504 [2024-12-06 13:06:21.327602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:34.504 [2024-12-06 13:06:21.464011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.504 [2024-12-06 13:06:21.464157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.504 [2024-12-06 13:06:21.464190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60520 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60520 /var/tmp/spdk2.sock 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60520 /var/tmp/spdk2.sock 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60520 /var/tmp/spdk2.sock 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60520 ']' 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:35.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.439 13:06:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 [2024-12-06 13:06:22.476622] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:35.697 [2024-12-06 13:06:22.477183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60520 ] 00:11:35.697 [2024-12-06 13:06:22.693683] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60497 has claimed it. 00:11:35.697 [2024-12-06 13:06:22.693760] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:36.262 ERROR: process (pid: 60520) is no longer running 00:11:36.262 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60520) - No such process 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60497 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60497 ']' 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60497 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60497 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60497' 00:11:36.262 killing process with pid 60497 00:11:36.262 13:06:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60497 00:11:36.262 13:06:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60497 00:11:38.831 00:11:38.831 real 0m4.391s 00:11:38.831 user 0m11.947s 00:11:38.831 sys 0m0.716s 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:38.831 ************************************ 00:11:38.831 END TEST locking_overlapped_coremask 00:11:38.831 ************************************ 00:11:38.831 13:06:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:38.831 13:06:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.831 13:06:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.831 13:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:38.831 ************************************ 00:11:38.831 START TEST locking_overlapped_coremask_via_rpc 00:11:38.831 ************************************ 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:38.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60584 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60584 /var/tmp/spdk.sock 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60584 ']' 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.831 13:06:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.831 [2024-12-06 13:06:25.574589] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:38.831 [2024-12-06 13:06:25.575582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:11:38.831 [2024-12-06 13:06:25.765102] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
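locking_overlapped_coremask, which ended above, moves the conflict to multi-core masks: the first target runs on -m 0x7 (cores 0-2), the second on -m 0x1c (cores 2-4), they collide exactly at core 2 ('Cannot create lock on core 2, probably process 60497 has claimed it.'), and check_remaining_locks then verifies that precisely /var/tmp/spdk_cpu_lock_000 through _002 survive. The overlap is plain bitmask arithmetic:

  # cores are bits in the mask: 0x7 = 0b00111 (0,1,2), 0x1c = 0b11100 (2,3,4)
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2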
00:11:38.831 [2024-12-06 13:06:25.765171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:39.088 [2024-12-06 13:06:25.898309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.088 [2024-12-06 13:06:25.898371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.088 [2024-12-06 13:06:25.898380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60608 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60608 /var/tmp/spdk2.sock 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60608 ']' 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.018 13:06:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.018 [2024-12-06 13:06:26.886827] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:40.018 [2024-12-06 13:06:26.887240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:11:40.276 [2024-12-06 13:06:27.094870] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:40.276 [2024-12-06 13:06:27.094945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.534 [2024-12-06 13:06:27.366285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.534 [2024-12-06 13:06:27.366414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.534 [2024-12-06 13:06:27.366429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.063 [2024-12-06 13:06:29.711327] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60584 has claimed it. 
00:11:43.063 request: 00:11:43.063 { 00:11:43.063 "method": "framework_enable_cpumask_locks", 00:11:43.063 "req_id": 1 00:11:43.063 } 00:11:43.063 Got JSON-RPC error response 00:11:43.063 response: 00:11:43.063 { 00:11:43.063 "code": -32603, 00:11:43.063 "message": "Failed to claim CPU core: 2" 00:11:43.063 } 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:43.063 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60584 /var/tmp/spdk.sock 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60584 ']' 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.064 13:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:43.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60608 /var/tmp/spdk2.sock 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60608 ']' 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
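The -32603 "Failed to claim CPU core: 2" above is this test's pass condition: SPDK takes an advisory per-core lock file under /var/tmp/spdk_cpu_lock_NNN for each core it runs on, and a second target whose coremask overlaps an already-claimed core must be refused. A minimal sketch of the scenario this trace drives, using the binaries, masks, and sockets shown above (the flag semantics are as logged here, not independently verified):

./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no locks taken at startup
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, overlapping on core 2
./scripts/rpc.py framework_enable_cpumask_locks                                # first instance claims /var/tmp/spdk_cpu_lock_000..002
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # rejected with -32603: core 2 is already claimed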
00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.064 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:43.630 00:11:43.630 real 0m4.923s 00:11:43.630 user 0m1.926s 00:11:43.630 sys 0m0.254s 00:11:43.630 ************************************ 00:11:43.630 END TEST locking_overlapped_coremask_via_rpc 00:11:43.630 ************************************ 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.630 13:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.630 13:06:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:43.630 13:06:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60584 ]] 00:11:43.630 13:06:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60584 00:11:43.630 13:06:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60584 ']' 00:11:43.630 13:06:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60584 00:11:43.630 13:06:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:43.630 13:06:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.631 13:06:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60584 00:11:43.631 13:06:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.631 13:06:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.631 13:06:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60584' 00:11:43.631 killing process with pid 60584 00:11:43.631 13:06:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60584 00:11:43.631 13:06:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60584 00:11:46.161 13:06:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60608 ]] 00:11:46.161 13:06:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60608 00:11:46.161 13:06:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60608 ']' 00:11:46.161 13:06:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60608 00:11:46.161 13:06:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:46.162 13:06:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.162 
13:06:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60608 00:11:46.162 13:06:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:46.162 killing process with pid 60608 00:11:46.162 13:06:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:46.162 13:06:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60608' 00:11:46.162 13:06:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60608 00:11:46.162 13:06:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60608 00:11:48.060 13:06:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:48.060 Process with pid 60584 is not found 00:11:48.060 Process with pid 60608 is not found 00:11:48.060 13:06:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:48.060 13:06:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60584 ]] 00:11:48.060 13:06:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60584 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60584 ']' 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60584 00:11:48.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60584) - No such process 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60584 is not found' 00:11:48.060 13:06:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60608 ]] 00:11:48.060 13:06:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60608 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60608 ']' 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60608 00:11:48.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60608) - No such process 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60608 is not found' 00:11:48.060 13:06:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:48.060 ************************************ 00:11:48.060 END TEST cpu_locks 00:11:48.060 ************************************ 00:11:48.060 00:11:48.060 real 0m51.002s 00:11:48.060 user 1m28.594s 00:11:48.060 sys 0m7.658s 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.060 13:06:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:48.060 ************************************ 00:11:48.060 END TEST event 00:11:48.060 ************************************ 00:11:48.060 00:11:48.060 real 1m24.138s 00:11:48.060 user 2m35.006s 00:11:48.060 sys 0m11.936s 00:11:48.060 13:06:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.060 13:06:35 event -- common/autotest_common.sh@10 -- # set +x 00:11:48.341 13:06:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:48.341 13:06:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.341 13:06:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.341 13:06:35 -- common/autotest_common.sh@10 -- # set +x 00:11:48.341 ************************************ 00:11:48.341 START TEST thread 00:11:48.341 ************************************ 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:48.341 * Looking for test storage... 
00:11:48.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.341 13:06:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.341 13:06:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.341 13:06:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.341 13:06:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.341 13:06:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.341 13:06:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.341 13:06:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.341 13:06:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.341 13:06:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.341 13:06:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.341 13:06:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.341 13:06:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:48.341 13:06:35 thread -- scripts/common.sh@345 -- # : 1 00:11:48.341 13:06:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.341 13:06:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.341 13:06:35 thread -- scripts/common.sh@365 -- # decimal 1 00:11:48.341 13:06:35 thread -- scripts/common.sh@353 -- # local d=1 00:11:48.341 13:06:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.341 13:06:35 thread -- scripts/common.sh@355 -- # echo 1 00:11:48.341 13:06:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.341 13:06:35 thread -- scripts/common.sh@366 -- # decimal 2 00:11:48.341 13:06:35 thread -- scripts/common.sh@353 -- # local d=2 00:11:48.341 13:06:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.341 13:06:35 thread -- scripts/common.sh@355 -- # echo 2 00:11:48.341 13:06:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.341 13:06:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.341 13:06:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.341 13:06:35 thread -- scripts/common.sh@368 -- # return 0 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 13:06:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.341 13:06:35 thread -- common/autotest_common.sh@10 -- # set +x 00:11:48.341 ************************************ 00:11:48.341 START TEST thread_poller_perf 00:11:48.341 ************************************ 00:11:48.341 13:06:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:48.341 [2024-12-06 13:06:35.309466] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:48.341 [2024-12-06 13:06:35.310420] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60803 ] 00:11:48.598 [2024-12-06 13:06:35.493361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.855 [2024-12-06 13:06:35.668839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.855 Running 1000 pollers for 1 seconds with 1 microseconds period. 
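The two result tables that follow compute poller_cost by dividing the busy TSC cycle count by the number of poller invocations, then converting cycles to nanoseconds via the reported TSC frequency. A back-of-the-envelope check of the first run's figures (the formulas and the integer truncation are assumed from the printed fields, not taken from the poller_perf source):

awk -v busy=2208248513 -v runs=303000 -v hz=2200000000 'BEGIN {
    cyc = int(busy / runs)                                # 7287 cycles per poller call
    printf "%d cyc, %d nsec\n", cyc, int(cyc * 1e9 / hz)  # 3312 nsec at the reported 2.2 GHz TSC
}'

The same arithmetic applied to the second run (-l 0, i.e. a 0-microsecond period) lands at 630 cycles / 286 nsec: pollers that fire back-to-back with no timer period are an order of magnitude cheaper per call than the 1-microsecond timed pollers of the first run.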
00:11:50.226 [2024-12-06T13:06:37.242Z] ====================================== 00:11:50.226 [2024-12-06T13:06:37.242Z] busy:2208248513 (cyc) 00:11:50.226 [2024-12-06T13:06:37.242Z] total_run_count: 303000 00:11:50.226 [2024-12-06T13:06:37.242Z] tsc_hz: 2200000000 (cyc) 00:11:50.226 [2024-12-06T13:06:37.242Z] ====================================== 00:11:50.226 [2024-12-06T13:06:37.242Z] poller_cost: 7287 (cyc), 3312 (nsec) 00:11:50.226 00:11:50.226 real 0m1.639s 00:11:50.226 user 0m1.413s 00:11:50.226 sys 0m0.113s 00:11:50.226 13:06:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.226 13:06:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:50.226 ************************************ 00:11:50.226 END TEST thread_poller_perf 00:11:50.226 ************************************ 00:11:50.226 13:06:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:50.226 13:06:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:50.226 13:06:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.226 13:06:36 thread -- common/autotest_common.sh@10 -- # set +x 00:11:50.226 ************************************ 00:11:50.226 START TEST thread_poller_perf 00:11:50.226 ************************************ 00:11:50.226 13:06:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:50.226 [2024-12-06 13:06:37.002178] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:50.226 [2024-12-06 13:06:37.002846] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60840 ] 00:11:50.226 [2024-12-06 13:06:37.189624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.483 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:11:50.483 [2024-12-06 13:06:37.321798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.900 [2024-12-06T13:06:38.916Z] ====================================== 00:11:51.900 [2024-12-06T13:06:38.916Z] busy:2204278601 (cyc) 00:11:51.900 [2024-12-06T13:06:38.916Z] total_run_count: 3498000 00:11:51.900 [2024-12-06T13:06:38.916Z] tsc_hz: 2200000000 (cyc) 00:11:51.900 [2024-12-06T13:06:38.916Z] ====================================== 00:11:51.900 [2024-12-06T13:06:38.916Z] poller_cost: 630 (cyc), 286 (nsec) 00:11:51.900 00:11:51.900 real 0m1.598s 00:11:51.900 user 0m1.388s 00:11:51.900 sys 0m0.101s 00:11:51.900 13:06:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.900 13:06:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:51.900 ************************************ 00:11:51.900 END TEST thread_poller_perf 00:11:51.900 ************************************ 00:11:51.900 13:06:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:51.900 ************************************ 00:11:51.900 END TEST thread 00:11:51.900 ************************************ 00:11:51.900 00:11:51.900 real 0m3.511s 00:11:51.900 user 0m2.932s 00:11:51.900 sys 0m0.349s 00:11:51.900 13:06:38 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.900 13:06:38 thread -- common/autotest_common.sh@10 -- # set +x 00:11:51.900 13:06:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:51.900 13:06:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:51.900 13:06:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.900 13:06:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.900 13:06:38 -- common/autotest_common.sh@10 -- # set +x 00:11:51.900 ************************************ 00:11:51.900 START TEST app_cmdline 00:11:51.900 ************************************ 00:11:51.900 13:06:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:51.900 * Looking for test storage... 
00:11:51.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:51.900 13:06:38 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.900 13:06:38 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.900 13:06:38 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.900 13:06:38 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:51.900 13:06:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:51.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.901 13:06:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.901 --rc genhtml_branch_coverage=1 00:11:51.901 --rc genhtml_function_coverage=1 00:11:51.901 --rc genhtml_legend=1 00:11:51.901 --rc geninfo_all_blocks=1 00:11:51.901 --rc geninfo_unexecuted_blocks=1 00:11:51.901 00:11:51.901 ' 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.901 --rc genhtml_branch_coverage=1 00:11:51.901 --rc genhtml_function_coverage=1 00:11:51.901 --rc genhtml_legend=1 00:11:51.901 --rc geninfo_all_blocks=1 00:11:51.901 --rc geninfo_unexecuted_blocks=1 00:11:51.901 00:11:51.901 ' 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.901 --rc genhtml_branch_coverage=1 00:11:51.901 --rc genhtml_function_coverage=1 00:11:51.901 --rc genhtml_legend=1 00:11:51.901 --rc geninfo_all_blocks=1 00:11:51.901 --rc geninfo_unexecuted_blocks=1 00:11:51.901 00:11:51.901 ' 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.901 --rc genhtml_branch_coverage=1 00:11:51.901 --rc genhtml_function_coverage=1 00:11:51.901 --rc genhtml_legend=1 00:11:51.901 --rc geninfo_all_blocks=1 00:11:51.901 --rc geninfo_unexecuted_blocks=1 00:11:51.901 00:11:51.901 ' 00:11:51.901 13:06:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:51.901 13:06:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:51.901 13:06:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60923 00:11:51.901 13:06:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60923 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60923 ']' 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.901 13:06:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:52.159 [2024-12-06 13:06:38.917169] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:52.159 [2024-12-06 13:06:38.917542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60923 ] 00:11:52.159 [2024-12-06 13:06:39.091852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.417 [2024-12-06 13:06:39.225586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.351 13:06:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.351 13:06:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:53.351 13:06:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:53.610 { 00:11:53.610 "version": "SPDK v25.01-pre git sha1 e9db16374", 00:11:53.610 "fields": { 00:11:53.610 "major": 25, 00:11:53.610 "minor": 1, 00:11:53.610 "patch": 0, 00:11:53.610 "suffix": "-pre", 00:11:53.610 "commit": "e9db16374" 00:11:53.610 } 00:11:53.610 } 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:53.610 13:06:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:53.610 13:06:40 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:53.868 request: 00:11:53.868 { 00:11:53.868 "method": "env_dpdk_get_mem_stats", 00:11:53.868 "req_id": 1 00:11:53.868 } 00:11:53.868 Got JSON-RPC error response 00:11:53.868 response: 00:11:53.868 { 00:11:53.868 "code": -32601, 00:11:53.868 "message": "Method not found" 00:11:53.868 } 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.868 13:06:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60923 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60923 ']' 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60923 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60923 00:11:53.868 killing process with pid 60923 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60923' 00:11:53.868 13:06:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 60923 00:11:53.869 13:06:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 60923 00:11:56.407 ************************************ 00:11:56.407 END TEST app_cmdline 00:11:56.407 ************************************ 00:11:56.407 00:11:56.407 real 0m4.429s 00:11:56.407 user 0m4.840s 00:11:56.407 sys 0m0.679s 00:11:56.407 13:06:43 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.407 13:06:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:56.407 13:06:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:56.407 13:06:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.407 13:06:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.407 13:06:43 -- common/autotest_common.sh@10 -- # set +x 00:11:56.407 ************************************ 00:11:56.407 START TEST version 00:11:56.407 ************************************ 00:11:56.407 13:06:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:56.407 * Looking for test storage... 
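The -32601 "Method not found" in the app_cmdline run above is presumably the --rpcs-allowed filter at work rather than a missing handler: the target was started permitting only spdk_get_version and rpc_get_methods, so every other method is rejected at the RPC layer. A sketch reusing the flags and methods from the trace (behavior inferred from this log):

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version           # allowed: returns the version JSON shown above
./scripts/rpc.py rpc_get_methods            # allowed: lists exactly the two permitted methods
./scripts/rpc.py env_dpdk_get_mem_stats     # filtered: JSON-RPC error -32601 "Method not found"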
00:11:56.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:56.407 13:06:43 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.407 13:06:43 version -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.407 13:06:43 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.407 13:06:43 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.407 13:06:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.407 13:06:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.407 13:06:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.407 13:06:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.407 13:06:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.407 13:06:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.407 13:06:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.407 13:06:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.407 13:06:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.407 13:06:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.407 13:06:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.407 13:06:43 version -- scripts/common.sh@344 -- # case "$op" in 00:11:56.407 13:06:43 version -- scripts/common.sh@345 -- # : 1 00:11:56.407 13:06:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.407 13:06:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.407 13:06:43 version -- scripts/common.sh@365 -- # decimal 1 00:11:56.407 13:06:43 version -- scripts/common.sh@353 -- # local d=1 00:11:56.407 13:06:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.407 13:06:43 version -- scripts/common.sh@355 -- # echo 1 00:11:56.407 13:06:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.407 13:06:43 version -- scripts/common.sh@366 -- # decimal 2 00:11:56.407 13:06:43 version -- scripts/common.sh@353 -- # local d=2 00:11:56.407 13:06:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.407 13:06:43 version -- scripts/common.sh@355 -- # echo 2 00:11:56.407 13:06:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.407 13:06:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.407 13:06:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.407 13:06:43 version -- scripts/common.sh@368 -- # return 0 00:11:56.407 13:06:43 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.407 13:06:43 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.407 --rc genhtml_branch_coverage=1 00:11:56.407 --rc genhtml_function_coverage=1 00:11:56.407 --rc genhtml_legend=1 00:11:56.407 --rc geninfo_all_blocks=1 00:11:56.408 --rc geninfo_unexecuted_blocks=1 00:11:56.408 00:11:56.408 ' 00:11:56.408 13:06:43 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.408 --rc genhtml_branch_coverage=1 00:11:56.408 --rc genhtml_function_coverage=1 00:11:56.408 --rc genhtml_legend=1 00:11:56.408 --rc geninfo_all_blocks=1 00:11:56.408 --rc geninfo_unexecuted_blocks=1 00:11:56.408 00:11:56.408 ' 00:11:56.408 13:06:43 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.408 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:56.408 --rc genhtml_branch_coverage=1 00:11:56.408 --rc genhtml_function_coverage=1 00:11:56.408 --rc genhtml_legend=1 00:11:56.408 --rc geninfo_all_blocks=1 00:11:56.408 --rc geninfo_unexecuted_blocks=1 00:11:56.408 00:11:56.408 ' 00:11:56.408 13:06:43 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.408 --rc genhtml_branch_coverage=1 00:11:56.408 --rc genhtml_function_coverage=1 00:11:56.408 --rc genhtml_legend=1 00:11:56.408 --rc geninfo_all_blocks=1 00:11:56.408 --rc geninfo_unexecuted_blocks=1 00:11:56.408 00:11:56.408 ' 00:11:56.408 13:06:43 version -- app/version.sh@17 -- # get_header_version major 00:11:56.408 13:06:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # tr -d '"' 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # cut -f2 00:11:56.408 13:06:43 version -- app/version.sh@17 -- # major=25 00:11:56.408 13:06:43 version -- app/version.sh@18 -- # get_header_version minor 00:11:56.408 13:06:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # tr -d '"' 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # cut -f2 00:11:56.408 13:06:43 version -- app/version.sh@18 -- # minor=1 00:11:56.408 13:06:43 version -- app/version.sh@19 -- # get_header_version patch 00:11:56.408 13:06:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # tr -d '"' 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # cut -f2 00:11:56.408 13:06:43 version -- app/version.sh@19 -- # patch=0 00:11:56.408 13:06:43 version -- app/version.sh@20 -- # get_header_version suffix 00:11:56.408 13:06:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # cut -f2 00:11:56.408 13:06:43 version -- app/version.sh@14 -- # tr -d '"' 00:11:56.408 13:06:43 version -- app/version.sh@20 -- # suffix=-pre 00:11:56.408 13:06:43 version -- app/version.sh@22 -- # version=25.1 00:11:56.408 13:06:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:56.408 13:06:43 version -- app/version.sh@28 -- # version=25.1rc0 00:11:56.408 13:06:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:56.408 13:06:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:56.408 13:06:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:56.408 13:06:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:56.408 00:11:56.408 real 0m0.231s 00:11:56.408 user 0m0.144s 00:11:56.408 sys 0m0.121s 00:11:56.408 ************************************ 00:11:56.408 END TEST version 00:11:56.408 ************************************ 00:11:56.408 13:06:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.408 13:06:43 version -- common/autotest_common.sh@10 -- # set +x 00:11:56.408 13:06:43 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:56.408 13:06:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:56.408 13:06:43 -- spdk/autotest.sh@194 -- # uname -s 00:11:56.408 13:06:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:56.408 13:06:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:56.408 13:06:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:56.408 13:06:43 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:11:56.408 13:06:43 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:56.408 13:06:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.408 13:06:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.408 13:06:43 -- common/autotest_common.sh@10 -- # set +x 00:11:56.408 ************************************ 00:11:56.408 START TEST blockdev_nvme 00:11:56.408 ************************************ 00:11:56.408 13:06:43 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:56.667 * Looking for test storage... 00:11:56.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.667 13:06:43 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.667 --rc genhtml_branch_coverage=1 00:11:56.667 --rc genhtml_function_coverage=1 00:11:56.667 --rc genhtml_legend=1 00:11:56.667 --rc geninfo_all_blocks=1 00:11:56.667 --rc geninfo_unexecuted_blocks=1 00:11:56.667 00:11:56.667 ' 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.667 --rc genhtml_branch_coverage=1 00:11:56.667 --rc genhtml_function_coverage=1 00:11:56.667 --rc genhtml_legend=1 00:11:56.667 --rc geninfo_all_blocks=1 00:11:56.667 --rc geninfo_unexecuted_blocks=1 00:11:56.667 00:11:56.667 ' 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.667 --rc genhtml_branch_coverage=1 00:11:56.667 --rc genhtml_function_coverage=1 00:11:56.667 --rc genhtml_legend=1 00:11:56.667 --rc geninfo_all_blocks=1 00:11:56.667 --rc geninfo_unexecuted_blocks=1 00:11:56.667 00:11:56.667 ' 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.667 --rc genhtml_branch_coverage=1 00:11:56.667 --rc genhtml_function_coverage=1 00:11:56.667 --rc genhtml_legend=1 00:11:56.667 --rc geninfo_all_blocks=1 00:11:56.667 --rc geninfo_unexecuted_blocks=1 00:11:56.667 00:11:56.667 ' 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:56.667 13:06:43 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:11:56.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61117 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61117 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61117 ']' 00:11:56.667 13:06:43 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.667 13:06:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.925 [2024-12-06 13:06:43.713961] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:56.925 [2024-12-06 13:06:43.714405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:11:56.925 [2024-12-06 13:06:43.905679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.184 [2024-12-06 13:06:44.059737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.120 13:06:44 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.120 13:06:44 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:11:58.120 13:06:44 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:11:58.120 13:06:44 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:11:58.120 13:06:44 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:11:58.120 13:06:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:58.120 13:06:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:58.120 13:06:45 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:58.120 13:06:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.120 13:06:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.378 13:06:45 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.378 13:06:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:11:58.378 13:06:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.378 13:06:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.378 13:06:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.378 13:06:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.636 13:06:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.636 13:06:45 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:58.636 13:06:45 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:58.636 13:06:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.637 13:06:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.637 13:06:45 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:58.637 13:06:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.637 13:06:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:58.637 13:06:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:58.638 13:06:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "03d06b9f-db69-4d7d-82f2-ea9e99b717ef"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03d06b9f-db69-4d7d-82f2-ea9e99b717ef",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fc401f79-d367-440a-b194-459b48522397"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fc401f79-d367-440a-b194-459b48522397",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f39db658-d88c-478c-9977-75d01c918650"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f39db658-d88c-478c-9977-75d01c918650",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a2f590e1-4763-45ac-b1bd-a5d62dd7544c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a2f590e1-4763-45ac-b1bd-a5d62dd7544c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "15c730d0-15b1-4bb0-b0ba-8315a4c64872"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "15c730d0-15b1-4bb0-b0ba-8315a4c64872",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "000d06e2-b378-43aa-8435-fc440024783c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "000d06e2-b378-43aa-8435-fc440024783c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:58.638 13:06:45 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:58.638 13:06:45 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:58.638 13:06:45 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:58.638 13:06:45 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61117 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61117 ']' 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61117 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:11:58.638 13:06:45 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61117 00:11:58.638 killing process with pid 61117 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61117' 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61117 00:11:58.638 13:06:45 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61117 00:12:01.163 13:06:47 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:01.163 13:06:47 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:01.163 13:06:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:01.163 13:06:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.163 13:06:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.163 ************************************ 00:12:01.163 START TEST bdev_hello_world 00:12:01.163 ************************************ 00:12:01.163 13:06:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:01.163 [2024-12-06 13:06:47.929465] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:01.163 [2024-12-06 13:06:47.929649] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61212 ] 00:12:01.163 [2024-12-06 13:06:48.113026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.420 [2024-12-06 13:06:48.242972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.983 [2024-12-06 13:06:48.910812] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:01.983 [2024-12-06 13:06:48.910876] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:01.983 [2024-12-06 13:06:48.910906] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:01.983 [2024-12-06 13:06:48.914069] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:01.983 [2024-12-06 13:06:48.914539] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:01.983 [2024-12-06 13:06:48.914575] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:01.983 [2024-12-06 13:06:48.914814] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
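The hello_bdev example that just ran is driven entirely by the JSON it is handed on the command line: it opens the named bdev, writes "Hello World!" into it, reads the string back, and stops, which is exactly the NOTICE sequence above. Outside the harness the same run is just (paths as on this VM; bdev.json is the attach-controllers config from §83):

    # Reproduce the bdev_hello_world run: open Nvme0n1, write, read back, exit.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1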
00:12:01.983 00:12:01.983 [2024-12-06 13:06:48.914852] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:03.356 ************************************ 00:12:03.356 END TEST bdev_hello_world 00:12:03.356 ************************************ 00:12:03.356 00:12:03.356 real 0m2.154s 00:12:03.356 user 0m1.769s 00:12:03.356 sys 0m0.274s 00:12:03.356 13:06:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.356 13:06:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:03.356 13:06:50 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:12:03.356 13:06:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.356 13:06:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.356 13:06:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:03.356 ************************************ 00:12:03.356 START TEST bdev_bounds 00:12:03.356 ************************************ 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61254 00:12:03.356 Process bdevio pid: 61254 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61254' 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61254 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61254 ']' 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.356 13:06:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:03.356 [2024-12-06 13:06:50.141916] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
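bdev_bounds splits the work between a C app and a Python driver: bdevio is started with -w (holding the app until tests are triggered over RPC, as far as this log shows) and -s 0 (no pre-reserved memory, per §713's PRE_RESERVED_MEM=0), and once waitforlisten returns, tests.py fires the CUnit suites via the perform_tests call seen just below. A sketch of the two halves under those assumptions:

    # Half 1: start bdevio in wait mode against the shared JSON config.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!

    # Half 2: after the RPC socket is up, run every suite and reap the app,
    # as blockdev.sh@293-294 do with perform_tests and killprocess.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"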
00:12:03.356 [2024-12-06 13:06:50.142732] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:12:03.356 [2024-12-06 13:06:50.357099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:03.614 [2024-12-06 13:06:50.493379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.614 [2024-12-06 13:06:50.493520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.614 [2024-12-06 13:06:50.493532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.234 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.234 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:04.234 13:06:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:04.492 I/O targets: 00:12:04.492 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:04.492 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:04.492 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:04.492 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:04.492 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:04.492 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:04.492 00:12:04.492 00:12:04.492 CUnit - A unit testing framework for C - Version 2.1-3 00:12:04.492 http://cunit.sourceforge.net/ 00:12:04.492 00:12:04.492 00:12:04.492 Suite: bdevio tests on: Nvme3n1 00:12:04.492 Test: blockdev write read block ...passed 00:12:04.492 Test: blockdev write zeroes read block ...passed 00:12:04.492 Test: blockdev write zeroes read no split ...passed 00:12:04.493 Test: blockdev write zeroes read split ...passed 00:12:04.493 Test: blockdev write zeroes read split partial ...passed 00:12:04.493 Test: blockdev reset ...[2024-12-06 13:06:51.402140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:04.493 passed 00:12:04.493 Test: blockdev write read 8 blocks ...[2024-12-06 13:06:51.405894] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:12:04.493 passed 00:12:04.493 Test: blockdev write read size > 128k ...passed 00:12:04.493 Test: blockdev write read invalid size ...passed 00:12:04.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.493 Test: blockdev write read max offset ...passed 00:12:04.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.493 Test: blockdev writev readv 8 blocks ...passed 00:12:04.493 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.493 Test: blockdev writev readv block ...passed 00:12:04.493 Test: blockdev writev readv size > 128k ...passed 00:12:04.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.493 Test: blockdev comparev and writev ...[2024-12-06 13:06:51.415031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc20a000 len:0x1000 00:12:04.493 [2024-12-06 13:06:51.415095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:04.493 passed 00:12:04.493 Test: blockdev nvme passthru rw ...passed 00:12:04.493 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:06:51.415908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:04.493 passed 00:12:04.493 Test: blockdev nvme admin passthru ...[2024-12-06 13:06:51.415961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:04.493 passed 00:12:04.493 Test: blockdev copy ...passed 00:12:04.493 Suite: bdevio tests on: Nvme2n3 00:12:04.493 Test: blockdev write read block ...passed 00:12:04.493 Test: blockdev write zeroes read block ...passed 00:12:04.493 Test: blockdev write zeroes read no split ...passed 00:12:04.493 Test: blockdev write zeroes read split ...passed 00:12:04.493 Test: blockdev write zeroes read split partial ...passed 00:12:04.493 Test: blockdev reset ...[2024-12-06 13:06:51.481120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:04.493 [2024-12-06 13:06:51.485483] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:04.493 passed 00:12:04.493 Test: blockdev write read 8 blocks ...passed 00:12:04.493 Test: blockdev write read size > 128k ...passed 00:12:04.493 Test: blockdev write read invalid size ...passed 00:12:04.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.493 Test: blockdev write read max offset ...passed 00:12:04.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.493 Test: blockdev writev readv 8 blocks ...passed 00:12:04.493 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.493 Test: blockdev writev readv block ...passed 00:12:04.493 Test: blockdev writev readv size > 128k ...passed 00:12:04.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.493 Test: blockdev comparev and writev ...[2024-12-06 13:06:51.493663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:12:04.493 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x29ec06000 len:0x1000 00:12:04.493 [2024-12-06 13:06:51.493880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:04.493 passed 00:12:04.493 Test: blockdev nvme passthru vendor specific ...passed 00:12:04.493 Test: blockdev nvme admin passthru ...[2024-12-06 13:06:51.494565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:04.493 [2024-12-06 13:06:51.494613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:04.493 passed 00:12:04.493 Test: blockdev copy ...passed 00:12:04.493 Suite: bdevio tests on: Nvme2n2 00:12:04.493 Test: blockdev write read block ...passed 00:12:04.493 Test: blockdev write zeroes read block ...passed 00:12:04.752 Test: blockdev write zeroes read no split ...passed 00:12:04.752 Test: blockdev write zeroes read split ...passed 00:12:04.752 Test: blockdev write zeroes read split partial ...passed 00:12:04.752 Test: blockdev reset ...[2024-12-06 13:06:51.558592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:04.752 [2024-12-06 13:06:51.562899] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:04.752 passed 00:12:04.752 Test: blockdev write read 8 blocks ...passed 00:12:04.752 Test: blockdev write read size > 128k ...passed 00:12:04.752 Test: blockdev write read invalid size ...passed 00:12:04.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.752 Test: blockdev write read max offset ...passed 00:12:04.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.752 Test: blockdev writev readv 8 blocks ...passed 00:12:04.752 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.752 Test: blockdev writev readv block ...passed 00:12:04.752 Test: blockdev writev readv size > 128k ...passed 00:12:04.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.752 Test: blockdev comparev and writev ...[2024-12-06 13:06:51.572308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc23c000 len:0x1000 00:12:04.752 [2024-12-06 13:06:51.572526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:04.752 passed 00:12:04.752 Test: blockdev nvme passthru rw ...passed 00:12:04.752 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:06:51.573945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:04.752 [2024-12-06 13:06:51.574122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:12:04.752 00:12:04.752 Test: blockdev nvme admin passthru ...passed 00:12:04.752 Test: blockdev copy ...passed 00:12:04.752 Suite: bdevio tests on: Nvme2n1 00:12:04.752 Test: blockdev write read block ...passed 00:12:04.752 Test: blockdev write zeroes read block ...passed 00:12:04.752 Test: blockdev write zeroes read no split ...passed 00:12:04.752 Test: blockdev write zeroes read split ...passed 00:12:04.752 Test: blockdev write zeroes read split partial ...passed 00:12:04.752 Test: blockdev reset ...[2024-12-06 13:06:51.634615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:04.752 passed 00:12:04.752 Test: blockdev write read 8 blocks ...[2024-12-06 13:06:51.639087] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:04.752 passed 00:12:04.752 Test: blockdev write read size > 128k ...passed 00:12:04.752 Test: blockdev write read invalid size ...passed 00:12:04.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.752 Test: blockdev write read max offset ...passed 00:12:04.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.752 Test: blockdev writev readv 8 blocks ...passed 00:12:04.752 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.752 Test: blockdev writev readv block ...passed 00:12:04.752 Test: blockdev writev readv size > 128k ...passed 00:12:04.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.752 Test: blockdev comparev and writev ...[2024-12-06 13:06:51.646914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc238000 len:0x1000 00:12:04.752 [2024-12-06 13:06:51.646983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:04.752 passed 00:12:04.752 Test: blockdev nvme passthru rw ...passed 00:12:04.752 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:06:51.647788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:04.752 [2024-12-06 13:06:51.647830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:04.752 passed 00:12:04.752 Test: blockdev nvme admin passthru ...passed 00:12:04.752 Test: blockdev copy ...passed 00:12:04.752 Suite: bdevio tests on: Nvme1n1 00:12:04.752 Test: blockdev write read block ...passed 00:12:04.752 Test: blockdev write zeroes read block ...passed 00:12:04.752 Test: blockdev write zeroes read no split ...passed 00:12:04.752 Test: blockdev write zeroes read split ...passed 00:12:04.752 Test: blockdev write zeroes read split partial ...passed 00:12:04.752 Test: blockdev reset ...[2024-12-06 13:06:51.712097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:04.752 [2024-12-06 13:06:51.715695] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:04.752 passed 00:12:04.752 Test: blockdev write read 8 blocks ...passed 00:12:04.752 Test: blockdev write read size > 128k ...passed 00:12:04.752 Test: blockdev write read invalid size ...passed 00:12:04.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:04.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:04.752 Test: blockdev write read max offset ...passed 00:12:04.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:04.752 Test: blockdev writev readv 8 blocks ...passed 00:12:04.752 Test: blockdev writev readv 30 x 1block ...passed 00:12:04.752 Test: blockdev writev readv block ...passed 00:12:04.752 Test: blockdev writev readv size > 128k ...passed 00:12:04.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:04.752 Test: blockdev comparev and writev ...[2024-12-06 13:06:51.724763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc234000 len:0x1000 00:12:04.752 [2024-12-06 13:06:51.724974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:04.752 passed 00:12:04.752 Test: blockdev nvme passthru rw ...passed 00:12:04.752 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:06:51.726280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:04.752 [2024-12-06 13:06:51.726450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:04.752 passed 00:12:04.752 Test: blockdev nvme admin passthru ...passed 00:12:04.752 Test: blockdev copy ...passed 00:12:04.752 Suite: bdevio tests on: Nvme0n1 00:12:04.752 Test: blockdev write read block ...passed 00:12:04.752 Test: blockdev write zeroes read block ...passed 00:12:04.752 Test: blockdev write zeroes read no split ...passed 00:12:04.752 Test: blockdev write zeroes read split ...passed 00:12:05.011 Test: blockdev write zeroes read split partial ...passed 00:12:05.011 Test: blockdev reset ...[2024-12-06 13:06:51.802008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:05.011 [2024-12-06 13:06:51.805874] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:12:05.011 passed 00:12:05.011 Test: blockdev write read 8 blocks ...passed 00:12:05.011 Test: blockdev write read size > 128k ...passed 00:12:05.011 Test: blockdev write read invalid size ...passed 00:12:05.011 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:05.011 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:05.011 Test: blockdev write read max offset ...passed 00:12:05.011 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:05.011 Test: blockdev writev readv 8 blocks ...passed 00:12:05.011 Test: blockdev writev readv 30 x 1block ...passed 00:12:05.011 Test: blockdev writev readv block ...passed 00:12:05.011 Test: blockdev writev readv size > 128k ...passed 00:12:05.011 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:05.011 Test: blockdev comparev and writev ...passed 00:12:05.011 Test: blockdev nvme passthru rw ...[2024-12-06 13:06:51.815673] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:05.011 separate metadata which is not supported yet. 00:12:05.011 passed 00:12:05.011 Test: blockdev nvme passthru vendor specific ...passed 00:12:05.011 Test: blockdev nvme admin passthru ...[2024-12-06 13:06:51.816429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:05.011 [2024-12-06 13:06:51.816497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:05.011 passed 00:12:05.011 Test: blockdev copy ...passed 00:12:05.011 00:12:05.011 Run Summary: Type Total Ran Passed Failed Inactive 00:12:05.011 suites 6 6 n/a 0 0 00:12:05.011 tests 138 138 138 0 0 00:12:05.011 asserts 893 893 893 0 n/a 00:12:05.011 00:12:05.011 Elapsed time = 1.278 seconds 00:12:05.011 0 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61254 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61254 ']' 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61254 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61254 00:12:05.011 killing process with pid 61254 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61254' 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61254 00:12:05.011 13:06:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61254 00:12:05.943 ************************************ 00:12:05.943 END TEST bdev_bounds 00:12:05.943 ************************************ 00:12:05.943 13:06:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:05.943 00:12:05.943 real 0m2.798s 00:12:05.943 user 0m7.155s 00:12:05.943 sys 0m0.445s 00:12:05.943 13:06:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.943 13:06:52 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 13:06:52 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:05.943 13:06:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:05.943 13:06:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.943 13:06:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 ************************************ 00:12:05.943 START TEST bdev_nbd 00:12:05.943 ************************************ 00:12:05.943 13:06:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:05.943 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:05.943 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:05.943 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.943 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.943 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:05.943 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61318 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61318 /var/tmp/spdk-nbd.sock 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61318 ']' 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 
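The nbd test needs only a minimal target, so it runs bdev_svc on a dedicated socket (§316) and then exports each bdev as a kernel block device; §308 has already confirmed /sys/module/nbd exists. The essential calls, with this VM's paths:

    # Minimal bdev-only target on its own RPC socket, as nbd_function_test uses.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

    # Export a bdev through the kernel nbd driver; the RPC prints the
    # /dev/nbd* node it picked (here the harness got /dev/nbd0 for Nvme0n1).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1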
00:12:05.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.944 13:06:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:06.202 [2024-12-06 13:06:52.999343] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:06.202 [2024-12-06 13:06:52.999543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.202 [2024-12-06 13:06:53.190796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.459 [2024-12-06 13:06:53.350257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:07.391 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:07.391 13:06:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.392 1+0 records in 00:12:07.392 1+0 records out 00:12:07.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637183 s, 6.4 MB/s 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:07.392 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:12:07.649 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:07.649 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.907 1+0 records in 00:12:07.907 1+0 records out 00:12:07.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547582 s, 7.5 MB/s 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:07.907 13:06:54 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:07.907 13:06:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.165 1+0 records in 00:12:08.165 1+0 records out 00:12:08.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615103 s, 6.7 MB/s 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:08.165 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.428 1+0 records in 00:12:08.428 1+0 records out 00:12:08.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597116 s, 6.9 MB/s 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:08.428 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.003 1+0 records in 00:12:09.003 1+0 records out 00:12:09.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443041 s, 9.2 MB/s 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:09.003 13:06:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.262 1+0 records in 00:12:09.262 1+0 records out 00:12:09.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000859729 s, 4.8 MB/s 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:09.262 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd0", 00:12:09.520 "bdev_name": "Nvme0n1" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd1", 00:12:09.520 "bdev_name": "Nvme1n1" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd2", 00:12:09.520 "bdev_name": "Nvme2n1" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd3", 00:12:09.520 "bdev_name": "Nvme2n2" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd4", 00:12:09.520 "bdev_name": "Nvme2n3" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd5", 00:12:09.520 "bdev_name": "Nvme3n1" 00:12:09.520 } 00:12:09.520 ]' 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd0", 00:12:09.520 "bdev_name": "Nvme0n1" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd1", 00:12:09.520 "bdev_name": "Nvme1n1" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 
"nbd_device": "/dev/nbd2", 00:12:09.520 "bdev_name": "Nvme2n1" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd3", 00:12:09.520 "bdev_name": "Nvme2n2" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd4", 00:12:09.520 "bdev_name": "Nvme2n3" 00:12:09.520 }, 00:12:09.520 { 00:12:09.520 "nbd_device": "/dev/nbd5", 00:12:09.520 "bdev_name": "Nvme3n1" 00:12:09.520 } 00:12:09.520 ]' 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.520 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.777 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.035 13:06:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:10.293 13:06:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.293 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:10.551 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.809 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.067 13:06:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
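[Annotation] The stop sequence above repeats one pattern per device: ask the RPC server to tear the export down with nbd_stop_disk, then poll /proc/partitions up to 20 times until the kernel no longer lists the device. A minimal sketch of what this trace suggests nbd_common.sh's waitfornbd_exit does (the sleep between attempts is an assumption; it is not visible in the xtrace output):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # done once the device has vanished from the kernel's partition list
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1    # assumed retry interval, not shown in the trace
        done
        return 0
    }

Its start-side counterpart, waitfornbd, traced earlier with the same 20-iteration bound, additionally reads one 4 KiB block from the fresh device with dd iflag=direct and checks that a non-zero byte count came back before returning.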
00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.325 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:11.583 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:12:11.841 /dev/nbd0 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.841 1+0 records in 00:12:11.841 1+0 records out 00:12:11.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515535 s, 7.9 MB/s 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:11.841 13:06:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:12:12.099 /dev/nbd1 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.099 1+0 records in 00:12:12.099 1+0 records out 
00:12:12.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484636 s, 8.5 MB/s 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:12.099 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:12:12.357 /dev/nbd10 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.357 1+0 records in 00:12:12.357 1+0 records out 00:12:12.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466349 s, 8.8 MB/s 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:12.357 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:12:12.615 /dev/nbd11 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:12.873 13:06:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.873 1+0 records in 00:12:12.873 1+0 records out 00:12:12.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566134 s, 7.2 MB/s 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:12.873 13:06:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:12:13.131 /dev/nbd12 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.131 1+0 records in 00:12:13.131 1+0 records out 00:12:13.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657958 s, 6.2 MB/s 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:13.131 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:12:13.388 /dev/nbd13 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.388 1+0 records in 00:12:13.388 1+0 records out 00:12:13.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000856882 s, 4.8 MB/s 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.388 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd0", 00:12:13.952 "bdev_name": "Nvme0n1" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd1", 00:12:13.952 "bdev_name": "Nvme1n1" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd10", 00:12:13.952 "bdev_name": "Nvme2n1" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd11", 00:12:13.952 "bdev_name": "Nvme2n2" 00:12:13.952 }, 
00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd12", 00:12:13.952 "bdev_name": "Nvme2n3" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd13", 00:12:13.952 "bdev_name": "Nvme3n1" 00:12:13.952 } 00:12:13.952 ]' 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd0", 00:12:13.952 "bdev_name": "Nvme0n1" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd1", 00:12:13.952 "bdev_name": "Nvme1n1" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd10", 00:12:13.952 "bdev_name": "Nvme2n1" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd11", 00:12:13.952 "bdev_name": "Nvme2n2" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd12", 00:12:13.952 "bdev_name": "Nvme2n3" 00:12:13.952 }, 00:12:13.952 { 00:12:13.952 "nbd_device": "/dev/nbd13", 00:12:13.952 "bdev_name": "Nvme3n1" 00:12:13.952 } 00:12:13.952 ]' 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:13.952 /dev/nbd1 00:12:13.952 /dev/nbd10 00:12:13.952 /dev/nbd11 00:12:13.952 /dev/nbd12 00:12:13.952 /dev/nbd13' 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:13.952 /dev/nbd1 00:12:13.952 /dev/nbd10 00:12:13.952 /dev/nbd11 00:12:13.952 /dev/nbd12 00:12:13.952 /dev/nbd13' 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:13.952 256+0 records in 00:12:13.952 256+0 records out 00:12:13.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00744273 s, 141 MB/s 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:13.952 256+0 records in 00:12:13.952 256+0 records out 00:12:13.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15205 s, 6.9 MB/s 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:13.952 13:07:00 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:14.210 256+0 records in 00:12:14.210 256+0 records out 00:12:14.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140735 s, 7.5 MB/s 00:12:14.210 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.210 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:14.210 256+0 records in 00:12:14.210 256+0 records out 00:12:14.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163144 s, 6.4 MB/s 00:12:14.210 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.466 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:14.466 256+0 records in 00:12:14.466 256+0 records out 00:12:14.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14272 s, 7.3 MB/s 00:12:14.466 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.466 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:14.725 256+0 records in 00:12:14.725 256+0 records out 00:12:14.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143471 s, 7.3 MB/s 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:14.725 256+0 records in 00:12:14.725 256+0 records out 00:12:14.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144933 s, 7.2 MB/s 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:14.725 13:07:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.725 13:07:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.290 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.548 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.806 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.064 13:07:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.322 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:16.580 
13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.580 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:16.838 13:07:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:17.096 malloc_lvol_verify 00:12:17.354 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:17.612 ca7aae76-05a1-41f5-bab7-f160a39b1935 00:12:17.612 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:17.871 f7332e32-7b39-4326-aedc-202c3e8bd85b 00:12:17.871 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:18.129 /dev/nbd0 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:18.129 13:07:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:18.129 mke2fs 1.47.0 (5-Feb-2023) 00:12:18.129 Discarding device blocks: 0/4096 done 00:12:18.129 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:18.129 00:12:18.129 Allocating group tables: 0/1 done 00:12:18.129 Writing inode tables: 0/1 done 00:12:18.129 Creating journal (1024 blocks): done 00:12:18.129 Writing superblocks and filesystem accounting information: 0/1 done 00:12:18.129 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:18.129 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.130 13:07:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61318 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61318 ']' 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61318 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61318 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61318' 00:12:18.387 killing process with pid 61318 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61318 00:12:18.387 13:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61318 00:12:19.761 13:07:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:19.761 00:12:19.761 real 0m13.545s 00:12:19.761 user 0m19.479s 00:12:19.761 sys 0m4.244s 00:12:19.761 13:07:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.761 13:07:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
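[Annotation] Two checks close out the nbd test traced above: a data round trip through every exported device, and an ext4 format of a logical volume served over NBD. Condensed and slightly reordered from the trace (the suite writes to all six devices first and compares afterwards; file paths are shortened here, while RPC names and sizes are verbatim from the log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # 1) push 1 MiB of random data through each NBD device and read it back
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"    # byte-for-byte read-back check
    done
    rm nbdrandtest

    # 2) ext4 on a logical volume exported through NBD
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MiB malloc bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs                     # 4 MiB volume = the 8192 sectors polled above
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0    # only sane once /sys/block/nbd0/size is non-zero
    $RPC nbd_stop_disk /dev/nbd0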
00:12:19.761 ************************************ 00:12:19.761 END TEST bdev_nbd 00:12:19.761 ************************************ 00:12:19.761 13:07:06 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:12:19.761 13:07:06 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:12:19.761 skipping fio tests on NVMe due to multi-ns failures. 00:12:19.761 13:07:06 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:12:19.761 13:07:06 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:19.761 13:07:06 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:19.761 13:07:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:19.761 13:07:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.761 13:07:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.761 ************************************ 00:12:19.761 START TEST bdev_verify 00:12:19.761 ************************************ 00:12:19.761 13:07:06 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:19.761 [2024-12-06 13:07:06.588310] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:19.761 [2024-12-06 13:07:06.588494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61732 ] 00:12:20.019 [2024-12-06 13:07:06.778918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:20.019 [2024-12-06 13:07:06.928991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.019 [2024-12-06 13:07:06.929002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.952 Running I/O for 5 seconds... 
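[Annotation] The verify pass that just launched drives all six NVMe bdevs with the bdevperf example app; the full command sits in the run_test line above. Restated with the options unpacked (glosses inferred from the values visible in this log rather than quoted from bdevperf's help text):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128   : keep 128 I/Os in flight per job
    # -o 4096  : 4 KiB per I/O
    # -w verify: write a pattern, read it back, compare
    # -t 5     : run each job for roughly 5 seconds
    # -m 0x3   : reactors on cores 0 and 1, matching the two cores reported above;
    #            combined with -C this yields one job per bdev per core, which is
    #            why every device appears twice below (Core Mask 0x1 and 0x2)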
00:12:22.899 20736.00 IOPS, 81.00 MiB/s [2024-12-06T13:07:10.847Z] 19968.00 IOPS, 78.00 MiB/s [2024-12-06T13:07:12.219Z] 19605.33 IOPS, 76.58 MiB/s [2024-12-06T13:07:12.783Z] 19184.00 IOPS, 74.94 MiB/s [2024-12-06T13:07:12.783Z] 19174.40 IOPS, 74.90 MiB/s
00:12:25.767 Latency(us)
00:12:25.767 [2024-12-06T13:07:12.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:25.767 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x0 length 0xbd0bd
00:12:25.767 Nvme0n1 : 5.06 1594.10 6.23 0.00 0.00 80116.80 13226.36 78166.57
00:12:25.767 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:12:25.767 Nvme0n1 : 5.05 1571.46 6.14 0.00 0.00 81200.99 16920.20 97708.22
00:12:25.767 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x0 length 0xa0000
00:12:25.767 Nvme1n1 : 5.06 1593.61 6.23 0.00 0.00 79985.20 13405.09 73400.32
00:12:25.767 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0xa0000 length 0xa0000
00:12:25.767 Nvme1n1 : 5.05 1570.84 6.14 0.00 0.00 81128.70 20137.43 91035.46
00:12:25.767 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x0 length 0x80000
00:12:25.767 Nvme2n1 : 5.06 1593.08 6.22 0.00 0.00 79896.88 13702.98 69110.69
00:12:25.767 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x80000 length 0x80000
00:12:25.767 Nvme2n1 : 5.05 1570.29 6.13 0.00 0.00 80939.02 19541.64 86269.21
00:12:25.767 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x0 length 0x80000
00:12:25.767 Nvme2n2 : 5.06 1592.63 6.22 0.00 0.00 79767.51 13881.72 70540.57
00:12:25.767 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x80000 length 0x80000
00:12:25.767 Nvme2n2 : 5.06 1569.60 6.13 0.00 0.00 80817.47 18469.24 91988.71
00:12:25.767 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x0 length 0x80000
00:12:25.767 Nvme2n3 : 5.06 1592.11 6.22 0.00 0.00 79637.68 14000.87 75306.82
00:12:25.767 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x80000 length 0x80000
00:12:25.767 Nvme2n3 : 5.07 1577.84 6.16 0.00 0.00 80296.25 3842.79 97231.59
00:12:25.767 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x0 length 0x20000
00:12:25.767 Nvme3n1 : 5.07 1591.61 6.22 0.00 0.00 79519.00 10783.65 77689.95
00:12:25.767 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:25.767 Verification LBA range: start 0x20000 length 0x20000
00:12:25.767 Nvme3n1 : 5.08 1586.32 6.20 0.00 0.00 79820.43 9651.67 99614.72
00:12:25.767 [2024-12-06T13:07:12.783Z] ===================================================================================================================
00:12:25.767 [2024-12-06T13:07:12.783Z] Total : 19003.50 74.23 0.00 0.00 80256.42 3842.79 99614.72
00:12:27.139
00:12:27.139 real 0m7.669s
00:12:27.139 user 0m14.032s
00:12:27.139 sys 0m0.352s
00:12:27.139 13:07:14 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.139 ************************************ 00:12:27.139 END TEST bdev_verify 00:12:27.139 ************************************ 00:12:27.139 13:07:14 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:27.397 13:07:14 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:27.397 13:07:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:27.397 13:07:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.397 13:07:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.397 ************************************ 00:12:27.397 START TEST bdev_verify_big_io 00:12:27.397 ************************************ 00:12:27.397 13:07:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:27.397 [2024-12-06 13:07:14.310609] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:27.397 [2024-12-06 13:07:14.310823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61835 ] 00:12:27.655 [2024-12-06 13:07:14.497650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:27.655 [2024-12-06 13:07:14.636333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.655 [2024-12-06 13:07:14.636342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.587 Running I/O for 5 seconds... 
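[Annotation] The same bdevperf invocation now repeats with -o 65536, i.e. 64 KiB I/Os, for the big-I/O variant. A handy cross-check when reading these result tables: MiB/s is just IOPS times the I/O size, and both Total rows satisfy it exactly:

    MiB/s = IOPS x io_size / 2^20
    verify (io_size 4096):  19003.50 / 256 = 74.23 MiB/s    (Total row above)
    big_io (io_size 65536):  1574.14 / 16  = 98.38 MiB/s    (Total row below)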
00:12:34.144 1835.00 IOPS, 114.69 MiB/s [2024-12-06T13:07:21.488Z] 2952.00 IOPS, 184.50 MiB/s [2024-12-06T13:07:21.488Z] 2832.33 IOPS, 177.02 MiB/s
00:12:34.472 Latency(us)
00:12:34.472 [2024-12-06T13:07:21.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:34.472 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:34.472 Verification LBA range: start 0x0 length 0xbd0b
00:12:34.472 Nvme0n1 : 5.75 120.88 7.55 0.00 0.00 996129.03 21090.68 1448941.38
00:12:34.472 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:34.472 Verification LBA range: start 0xbd0b length 0xbd0b
00:12:34.472 Nvme0n1 : 5.77 122.02 7.63 0.00 0.00 1019875.31 20971.52 960876.92
00:12:34.472 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:34.472 Verification LBA range: start 0x0 length 0xa000
00:12:34.472 Nvme1n1 : 5.75 123.55 7.72 0.00 0.00 956789.57 40751.48 1464193.40
00:12:34.472 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:34.472 Verification LBA range: start 0xa000 length 0xa000
00:12:34.472 Nvme1n1 : 5.77 121.93 7.62 0.00 0.00 994892.33 24307.90 934185.89
00:12:34.472 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:34.472 Verification LBA range: start 0x0 length 0x8000
00:12:34.472 Nvme2n1 : 5.80 131.09 8.19 0.00 0.00 889907.23 39321.60 1349803.29
00:12:34.472 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:34.473 Verification LBA range: start 0x8000 length 0x8000
00:12:34.473 Nvme2n1 : 5.78 121.87 7.62 0.00 0.00 966981.82 25261.15 949437.91
00:12:34.473 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:34.473 Verification LBA range: start 0x0 length 0x8000
00:12:34.473 Nvme2n2 : 5.85 139.82 8.74 0.00 0.00 811486.01 20137.43 1151527.10
00:12:34.473 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:34.473 Verification LBA range: start 0x8000 length 0x8000
00:12:34.473 Nvme2n2 : 5.81 129.12 8.07 0.00 0.00 895665.26 22878.02 968502.92
00:12:34.473 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:34.473 Verification LBA range: start 0x0 length 0x8000
00:12:34.473 Nvme2n3 : 5.88 139.04 8.69 0.00 0.00 789654.76 30504.03 1578583.51
00:12:34.473 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:34.473 Verification LBA range: start 0x8000 length 0x8000
00:12:34.473 Nvme2n3 : 5.81 129.08 8.07 0.00 0.00 869015.62 23235.49 999006.95
00:12:34.473 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:34.473 Verification LBA range: start 0x0 length 0x2000
00:12:34.473 Nvme3n1 : 5.91 159.66 9.98 0.00 0.00 674661.51 3291.69 1593835.52
00:12:34.473 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:34.473 Verification LBA range: start 0x2000 length 0x2000
00:12:34.473 Nvme3n1 : 5.82 136.08 8.51 0.00 0.00 801841.05 6047.19 1006632.96
[2024-12-06T13:07:21.489Z] ===================================================================================================================
[2024-12-06T13:07:21.489Z] Total : 1574.14 98.38 0.00 0.00 880136.67 3291.69 1593835.52
00:12:36.372
00:12:36.372 real 0m8.917s
00:12:36.372 user 0m16.524s
00:12:36.372 sys 0m0.372s
00:12:36.372 13:07:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:36.372 ************************************
00:12:36.372 END TEST bdev_verify_big_io
************************************
00:12:36.372 13:07:23 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:36.372 13:07:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:12:36.372 13:07:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:36.372 13:07:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:12:36.372 ************************************
00:12:36.372 START TEST bdev_write_zeroes
00:12:36.372 ************************************
00:12:36.372 13:07:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:36.631 [2024-12-06 13:07:23.281081] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
00:12:36.631 [2024-12-06 13:07:23.281341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ]
00:12:36.631 [2024-12-06 13:07:23.469826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:36.631 [2024-12-06 13:07:23.601630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.562 Running I/O for 1 seconds...
00:12:38.492 54912.00 IOPS, 214.50 MiB/s
00:12:38.492 Latency(us)
00:12:38.492 [2024-12-06T13:07:25.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:38.492 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:38.492 Nvme0n1 : 1.03 9089.37 35.51 0.00 0.00 14044.90 10724.07 28597.53
00:12:38.492 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:38.492 Nvme1n1 : 1.03 9075.81 35.45 0.00 0.00 14045.28 11141.12 28001.75
00:12:38.492 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:38.492 Nvme2n1 : 1.03 9062.22 35.40 0.00 0.00 14017.52 10604.92 27286.81
00:12:38.492 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:38.492 Nvme2n2 : 1.03 9048.79 35.35 0.00 0.00 13959.36 7864.32 26571.87
00:12:38.492 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:38.492 Nvme2n3 : 1.03 9035.38 35.29 0.00 0.00 13943.86 6702.55 26691.03
00:12:38.492 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:38.492 Nvme3n1 : 1.04 9021.87 35.24 0.00 0.00 13941.01 6702.55 28835.84
[2024-12-06T13:07:25.508Z] ===================================================================================================================
[2024-12-06T13:07:25.508Z] Total : 54333.44 212.24 0.00 0.00 13991.99 6702.55 28835.84
00:12:39.865
00:12:39.865 real 0m3.321s
00:12:39.865 user 0m2.874s
00:12:39.865 sys 0m0.322s
00:12:39.865 13:07:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:39.865 13:07:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
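[Annotation] After write_zeroes, the two remaining suites are negative tests: each hands bdevperf a deliberately malformed JSON config and passes only if the app refuses it with a clean non-zero exit instead of crashing. nonenclosed.json itself is not reproduced in the log, but the error it provokes below, 'Invalid JSON configuration: not enclosed in {}.', points at a top-level fragment along these lines (an illustrative guess, not the actual file):

    "subsystems": [
        { "subsystem": "bdev", "config": [] }
    ]

A well-formed SPDK config wraps everything in a single object: { "subsystems": [ ... ] }.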
************************************ 00:12:39.865 END TEST bdev_write_zeroes 00:12:39.865 ************************************ 00:12:39.865 13:07:26 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:39.865 13:07:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:39.865 13:07:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.865 13:07:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.865 ************************************ 00:12:39.865 START TEST bdev_json_nonenclosed 00:12:39.865 ************************************ 00:12:39.865 13:07:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:39.865 [2024-12-06 13:07:26.657026] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:39.865 [2024-12-06 13:07:26.657263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62003 ] 00:12:39.865 [2024-12-06 13:07:26.849838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.123 [2024-12-06 13:07:27.008436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.123 [2024-12-06 13:07:27.008599] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:40.123 [2024-12-06 13:07:27.008651] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:40.123 [2024-12-06 13:07:27.008679] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:40.382 00:12:40.382 real 0m0.739s 00:12:40.382 user 0m0.476s 00:12:40.382 sys 0m0.157s 00:12:40.382 13:07:27 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.382 13:07:27 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:40.382 ************************************ 00:12:40.382 END TEST bdev_json_nonenclosed 00:12:40.382 ************************************ 00:12:40.382 13:07:27 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:40.382 13:07:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:40.382 13:07:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.382 13:07:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.382 ************************************ 00:12:40.382 START TEST bdev_json_nonarray 00:12:40.382 ************************************ 00:12:40.382 13:07:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:40.640 [2024-12-06 13:07:27.453430] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:40.640 [2024-12-06 13:07:27.453613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62034 ] 00:12:40.640 [2024-12-06 13:07:27.639193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.919 [2024-12-06 13:07:27.781229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.919 [2024-12-06 13:07:27.781364] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:40.919 [2024-12-06 13:07:27.781394] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:40.919 [2024-12-06 13:07:27.781410] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:41.178 00:12:41.178 real 0m0.717s 00:12:41.178 user 0m0.463s 00:12:41.178 sys 0m0.148s 00:12:41.178 13:07:28 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.178 ************************************ 00:12:41.178 13:07:28 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:41.178 END TEST bdev_json_nonarray 00:12:41.178 ************************************ 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:12:41.178 13:07:28 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:12:41.178 00:12:41.178 real 0m44.710s 00:12:41.178 user 1m7.292s 00:12:41.178 sys 0m7.335s 00:12:41.178 13:07:28 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.178 ************************************ 00:12:41.178 END TEST blockdev_nvme 00:12:41.178 13:07:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:41.178 ************************************ 00:12:41.178 13:07:28 -- spdk/autotest.sh@209 -- # uname -s 00:12:41.178 13:07:28 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:12:41.178 13:07:28 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:41.178 13:07:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:41.178 13:07:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.178 13:07:28 -- common/autotest_common.sh@10 -- # set +x 00:12:41.178 ************************************ 00:12:41.178 START TEST blockdev_nvme_gpt 00:12:41.178 ************************************ 00:12:41.178 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:41.438 * Looking for test storage... 
00:12:41.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:41.438 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:41.438 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:12:41.438 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:41.438 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:12:41.438 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.439 13:07:28 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.439 --rc genhtml_branch_coverage=1 00:12:41.439 --rc genhtml_function_coverage=1 00:12:41.439 --rc genhtml_legend=1 00:12:41.439 --rc geninfo_all_blocks=1 00:12:41.439 --rc geninfo_unexecuted_blocks=1 00:12:41.439 00:12:41.439 ' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.439 --rc 
genhtml_branch_coverage=1 00:12:41.439 --rc genhtml_function_coverage=1 00:12:41.439 --rc genhtml_legend=1 00:12:41.439 --rc geninfo_all_blocks=1 00:12:41.439 --rc geninfo_unexecuted_blocks=1 00:12:41.439 00:12:41.439 ' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.439 --rc genhtml_branch_coverage=1 00:12:41.439 --rc genhtml_function_coverage=1 00:12:41.439 --rc genhtml_legend=1 00:12:41.439 --rc geninfo_all_blocks=1 00:12:41.439 --rc geninfo_unexecuted_blocks=1 00:12:41.439 00:12:41.439 ' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.439 --rc genhtml_branch_coverage=1 00:12:41.439 --rc genhtml_function_coverage=1 00:12:41.439 --rc genhtml_legend=1 00:12:41.439 --rc geninfo_all_blocks=1 00:12:41.439 --rc geninfo_unexecuted_blocks=1 00:12:41.439 00:12:41.439 ' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62118 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62118 
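The start_spdk_tgt/waitforlisten pair below amounts to launching the target binary and polling its RPC socket until it answers. A rough sketch of that pattern, assuming the default /var/tmp/spdk.sock socket named in the wait message; the polling loop is illustrative, not the harness's exact implementation, and rpc_get_methods (a standard SPDK RPC) is used here only as a liveness probe:

  # launch the SPDK target and keep its pid for the later killprocess step
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  # block until the RPC server accepts requests on the UNIX domain socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done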
00:12:41.439 13:07:28 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62118 ']' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.439 13:07:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:41.699 [2024-12-06 13:07:28.494565] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:41.699 [2024-12-06 13:07:28.494755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ] 00:12:41.699 [2024-12-06 13:07:28.682419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.958 [2024-12-06 13:07:28.818394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.893 13:07:29 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.893 13:07:29 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:12:42.893 13:07:29 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:12:42.893 13:07:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:12:42.893 13:07:29 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:43.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:43.409 Waiting for block devices as requested 00:12:43.409 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.409 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.667 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.667 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:48.946 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:48.946 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:12:48.946 13:07:35 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:12:48.946 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:48.947 13:07:35 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:12:48.947 BYT; 00:12:48.947 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:12:48.947 BYT; 00:12:48.947 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:48.947 13:07:35 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:48.947 13:07:35 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:12:49.879 The operation has completed successfully. 00:12:49.879 13:07:36 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:12:50.861 The operation has completed successfully. 00:12:50.861 13:07:37 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:51.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:51.990 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.990 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.990 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.990 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:52.248 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:12:52.248 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.248 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:52.248 [] 00:12:52.248 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.248 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:12:52.248 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:12:52.248 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:52.248 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:52.248 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:52.248 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.248 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:12:52.506 13:07:39 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:12:52.506 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.506 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:52.765 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.765 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:12:52.765 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:12:52.766 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "baa9b416-e2bd-4073-a02d-ff2bb4da5422"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "baa9b416-e2bd-4073-a02d-ff2bb4da5422",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bcf908c0-b692-4078-93fa-d1fa0b7674d3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bcf908c0-b692-4078-93fa-d1fa0b7674d3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "97fce8c2-7fd5-4fa9-9d3b-f4a6cba27627"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "97fce8c2-7fd5-4fa9-9d3b-f4a6cba27627",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "afb805c9-870c-46e0-aabb-9592173dfb92"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "afb805c9-870c-46e0-aabb-9592173dfb92",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b70fb85c-9d52-4799-81ff-2493e989c07e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b70fb85c-9d52-4799-81ff-2493e989c07e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:52.766 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:12:52.766 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:12:52.766 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:12:52.766 13:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62118 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62118 ']' 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62118 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62118 00:12:52.766 killing process with pid 62118 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62118' 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62118 00:12:52.766 13:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62118 00:12:55.292 13:07:42 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:55.292 13:07:42 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:55.292 13:07:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:55.292 13:07:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.292 13:07:42 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:55.292 ************************************ 00:12:55.292 START TEST bdev_hello_world 00:12:55.292 ************************************ 00:12:55.292 13:07:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:55.292 [2024-12-06 13:07:42.141395] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:55.292 [2024-12-06 13:07:42.141645] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62755 ] 00:12:55.550 [2024-12-06 13:07:42.338642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.550 [2024-12-06 13:07:42.499925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.482 [2024-12-06 13:07:43.177913] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:56.482 [2024-12-06 13:07:43.177982] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:56.482 [2024-12-06 13:07:43.178016] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:56.482 [2024-12-06 13:07:43.181265] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:56.482 [2024-12-06 13:07:43.181716] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:56.482 [2024-12-06 13:07:43.181753] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:56.482 [2024-12-06 13:07:43.181927] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
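The hello_bdev notices above and below come from a single invocation; stripped of the harness wrapping it takes just two arguments of interest, both copied verbatim from the run_test line above (--json supplies the bdev configuration, -b names the bdev to open):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1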
00:12:56.482 00:12:56.482 [2024-12-06 13:07:43.181960] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:57.415 00:12:57.415 real 0m2.251s 00:12:57.415 user 0m1.849s 00:12:57.415 sys 0m0.288s 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.415 ************************************ 00:12:57.415 END TEST bdev_hello_world 00:12:57.415 ************************************ 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:57.415 13:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:12:57.415 13:07:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.415 13:07:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.415 13:07:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:57.415 ************************************ 00:12:57.415 START TEST bdev_bounds 00:12:57.415 ************************************ 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62797 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:57.415 Process bdevio pid: 62797 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62797' 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62797 00:12:57.415 13:07:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62797 ']' 00:12:57.416 13:07:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.416 13:07:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.416 13:07:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.416 13:07:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.416 13:07:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:57.416 [2024-12-06 13:07:44.428020] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:57.416 [2024-12-06 13:07:44.428656] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62797 ] 00:12:57.674 [2024-12-06 13:07:44.602589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.932 [2024-12-06 13:07:44.739213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.932 [2024-12-06 13:07:44.739352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.932 [2024-12-06 13:07:44.739365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.495 13:07:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.495 13:07:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:58.495 13:07:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:58.751 I/O targets: 00:12:58.751 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:58.751 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:12:58.751 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:12:58.751 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.751 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.751 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.751 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:58.751 00:12:58.751 00:12:58.751 CUnit - A unit testing framework for C - Version 2.1-3 00:12:58.751 http://cunit.sourceforge.net/ 00:12:58.751 00:12:58.751 00:12:58.751 Suite: bdevio tests on: Nvme3n1 00:12:58.751 Test: blockdev write read block ...passed 00:12:58.751 Test: blockdev write zeroes read block ...passed 00:12:58.751 Test: blockdev write zeroes read no split ...passed 00:12:58.751 Test: blockdev write zeroes read split ...passed 00:12:58.751 Test: blockdev write zeroes read split partial ...passed 00:12:58.751 Test: blockdev reset ...[2024-12-06 13:07:45.604412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:58.751 [2024-12-06 13:07:45.608291] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:12:58.751 passed 00:12:58.751 Test: blockdev write read 8 blocks ...passed 00:12:58.751 Test: blockdev write read size > 128k ...passed 00:12:58.751 Test: blockdev write read invalid size ...passed 00:12:58.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.751 Test: blockdev write read max offset ...passed 00:12:58.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.751 Test: blockdev writev readv 8 blocks ...passed 00:12:58.751 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.751 Test: blockdev writev readv block ...passed 00:12:58.751 Test: blockdev writev readv size > 128k ...passed 00:12:58.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.751 Test: blockdev comparev and writev ...[2024-12-06 13:07:45.616021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a04000 len:0x1000 00:12:58.751 [2024-12-06 13:07:45.616083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:58.751 passed 00:12:58.751 Test: blockdev nvme passthru rw ...passed 00:12:58.752 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:07:45.616999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:58.752 [2024-12-06 13:07:45.617044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:58.752 passed 00:12:58.752 Test: blockdev nvme admin passthru ...passed 00:12:58.752 Test: blockdev copy ...passed 00:12:58.752 Suite: bdevio tests on: Nvme2n3 00:12:58.752 Test: blockdev write read block ...passed 00:12:58.752 Test: blockdev write zeroes read block ...passed 00:12:58.752 Test: blockdev write zeroes read no split ...passed 00:12:58.752 Test: blockdev write zeroes read split ...passed 00:12:58.752 Test: blockdev write zeroes read split partial ...passed 00:12:58.752 Test: blockdev reset ...[2024-12-06 13:07:45.684434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:58.752 [2024-12-06 13:07:45.688918] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:58.752 passed 00:12:58.752 Test: blockdev write read 8 blocks ...passed 00:12:58.752 Test: blockdev write read size > 128k ...passed 00:12:58.752 Test: blockdev write read invalid size ...passed 00:12:58.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.752 Test: blockdev write read max offset ...passed 00:12:58.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.752 Test: blockdev writev readv 8 blocks ...passed 00:12:58.752 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.752 Test: blockdev writev readv block ...passed 00:12:58.752 Test: blockdev writev readv size > 128k ...passed 00:12:58.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.752 Test: blockdev comparev and writev ...[2024-12-06 13:07:45.697820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a02000 len:0x1000 00:12:58.752 [2024-12-06 13:07:45.697887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:58.752 passed 00:12:58.752 Test: blockdev nvme passthru rw ...passed 00:12:58.752 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.752 Test: blockdev nvme admin passthru ...[2024-12-06 13:07:45.698670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:58.752 [2024-12-06 13:07:45.698723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:58.752 passed 00:12:58.752 Test: blockdev copy ...passed 00:12:58.752 Suite: bdevio tests on: Nvme2n2 00:12:58.752 Test: blockdev write read block ...passed 00:12:58.752 Test: blockdev write zeroes read block ...passed 00:12:58.752 Test: blockdev write zeroes read no split ...passed 00:12:58.752 Test: blockdev write zeroes read split ...passed 00:12:58.752 Test: blockdev write zeroes read split partial ...passed 00:12:58.752 Test: blockdev reset ...[2024-12-06 13:07:45.762447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:59.009 [2024-12-06 13:07:45.766911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:12:59.009 passed 00:12:59.009 Test: blockdev write read 8 blocks ...passed 00:12:59.009 Test: blockdev write read size > 128k ...passed 00:12:59.009 Test: blockdev write read invalid size ...passed 00:12:59.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.009 Test: blockdev write read max offset ...passed 00:12:59.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.010 Test: blockdev writev readv 8 blocks ...passed 00:12:59.010 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.010 Test: blockdev writev readv block ...passed 00:12:59.010 Test: blockdev writev readv size > 128k ...passed 00:12:59.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.010 Test: blockdev comparev and writev ...[2024-12-06 13:07:45.776121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd838000 len:0x1000 00:12:59.010 [2024-12-06 13:07:45.776197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.010 passed 00:12:59.010 Test: blockdev nvme passthru rw ...passed 00:12:59.010 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.010 Test: blockdev nvme admin passthru ...[2024-12-06 13:07:45.777053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:59.010 [2024-12-06 13:07:45.777095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:59.010 passed 00:12:59.010 Test: blockdev copy ...passed 00:12:59.010 Suite: bdevio tests on: Nvme2n1 00:12:59.010 Test: blockdev write read block ...passed 00:12:59.010 Test: blockdev write zeroes read block ...passed 00:12:59.010 Test: blockdev write zeroes read no split ...passed 00:12:59.010 Test: blockdev write zeroes read split ...passed 00:12:59.010 Test: blockdev write zeroes read split partial ...passed 00:12:59.010 Test: blockdev reset ...[2024-12-06 13:07:45.841384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:59.010 [2024-12-06 13:07:45.845692] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:12:59.010 Test: blockdev write read 8 blocks ...passed 00:12:59.010 Test: blockdev write read size > 128k ...
00:12:59.010 passed 00:12:59.010 Test: blockdev write read invalid size ...passed 00:12:59.010 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.010 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.010 Test: blockdev write read max offset ...passed 00:12:59.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.010 Test: blockdev writev readv 8 blocks ...passed 00:12:59.010 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.010 Test: blockdev writev readv block ...passed 00:12:59.010 Test: blockdev writev readv size > 128k ...passed 00:12:59.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.010 Test: blockdev comparev and writev ...[2024-12-06 13:07:45.855751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd834000 len:0x1000 00:12:59.010 [2024-12-06 13:07:45.855827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.010 passed 00:12:59.010 Test: blockdev nvme passthru rw ...passed 00:12:59.010 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:07:45.856561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:59.010 passed 00:12:59.010 Test: blockdev nvme admin passthru ...[2024-12-06 13:07:45.856598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:59.010 passed 00:12:59.010 Test: blockdev copy ...passed 00:12:59.010 Suite: bdevio tests on: Nvme1n1p2 00:12:59.010 Test: blockdev write read block ...passed 00:12:59.010 Test: blockdev write zeroes read block ...passed 00:12:59.010 Test: blockdev write zeroes read no split ...passed 00:12:59.010 Test: blockdev write zeroes read split ...passed 00:12:59.010 Test: blockdev write zeroes read split partial ...passed 00:12:59.010 Test: blockdev reset ...[2024-12-06 13:07:45.935094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:59.010 [2024-12-06 13:07:45.939080] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:59.010 passed 00:12:59.010 Test: blockdev write read 8 blocks ...passed 00:12:59.010 Test: blockdev write read size > 128k ...passed 00:12:59.010 Test: blockdev write read invalid size ...passed 00:12:59.010 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.010 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.010 Test: blockdev write read max offset ...passed 00:12:59.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.010 Test: blockdev writev readv 8 blocks ...passed 00:12:59.010 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.010 Test: blockdev writev readv block ...passed 00:12:59.010 Test: blockdev writev readv size > 128k ...passed 00:12:59.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.010 Test: blockdev comparev and writev ...[2024-12-06 13:07:45.947998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cd830000 len:0x1000 00:12:59.010 [2024-12-06 13:07:45.948058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.010 passed 00:12:59.010 Test: blockdev nvme passthru rw ...passed 00:12:59.010 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.010 Test: blockdev nvme admin passthru ...passed 00:12:59.010 Test: blockdev copy ...passed 00:12:59.010 Suite: bdevio tests on: Nvme1n1p1 00:12:59.010 Test: blockdev write read block ...passed 00:12:59.010 Test: blockdev write zeroes read block ...passed 00:12:59.010 Test: blockdev write zeroes read no split ...passed 00:12:59.010 Test: blockdev write zeroes read split ...passed 00:12:59.010 Test: blockdev write zeroes read split partial ...passed 00:12:59.010 Test: blockdev reset ...[2024-12-06 13:07:46.018544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:59.010 [2024-12-06 13:07:46.022503] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:59.010 passed 00:12:59.268 Test: blockdev write read 8 blocks ...passed 00:12:59.268 Test: blockdev write read size > 128k ...passed 00:12:59.268 Test: blockdev write read invalid size ...passed 00:12:59.268 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.268 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.268 Test: blockdev write read max offset ...passed 00:12:59.268 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.268 Test: blockdev writev readv 8 blocks ...passed 00:12:59.268 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.268 Test: blockdev writev readv block ...passed 00:12:59.268 Test: blockdev writev readv size > 128k ...passed 00:12:59.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.268 Test: blockdev comparev and writev ...[2024-12-06 13:07:46.032044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b9c0e000 len:0x1000 00:12:59.268 [2024-12-06 13:07:46.032105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.268 passed 00:12:59.268 Test: blockdev nvme passthru rw ...passed 00:12:59.268 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.268 Test: blockdev nvme admin passthru ...passed 00:12:59.268 Test: blockdev copy ...passed 00:12:59.268 Suite: bdevio tests on: Nvme0n1 00:12:59.268 Test: blockdev write read block ...passed 00:12:59.268 Test: blockdev write zeroes read block ...passed 00:12:59.268 Test: blockdev write zeroes read no split ...passed 00:12:59.268 Test: blockdev write zeroes read split ...passed 00:12:59.268 Test: blockdev write zeroes read split partial ...passed 00:12:59.268 Test: blockdev reset ...[2024-12-06 13:07:46.100040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:59.268 passed 00:12:59.268 Test: blockdev write read 8 blocks ...[2024-12-06 13:07:46.103805] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:59.268 passed 00:12:59.268 Test: blockdev write read size > 128k ...passed 00:12:59.268 Test: blockdev write read invalid size ...passed 00:12:59.268 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.268 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.268 Test: blockdev write read max offset ...passed 00:12:59.268 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.268 Test: blockdev writev readv 8 blocks ...passed 00:12:59.268 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.268 Test: blockdev writev readv block ...passed 00:12:59.268 Test: blockdev writev readv size > 128k ...passed 00:12:59.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.268 Test: blockdev comparev and writev ...passed 00:12:59.268 Test: blockdev nvme passthru rw ...[2024-12-06 13:07:46.111333] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:59.268 separate metadata which is not supported yet. 
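bdevio's comparev_and_writev case is skipped on Nvme0n1 rather than failed: compare-and-write is not exercised on bdevs that carry separate (non-interleaved) metadata, and the skip is printed at *ERROR* level even though the suite still passes. A quick way to inspect a bdev's metadata layout is sketched below, assuming a running SPDK app with an RPC socket (the bdev_nbd stage below uses /var/tmp/spdk-nbd.sock), that jq is available, and that the md_size/md_interleave field names match this SPDK build.

    # Dump the metadata layout of one bdev; non-zero md_size with
    # md_interleave=false is the "separate metadata" case bdevio skips.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'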
00:12:59.268 passed 00:12:59.268 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.268 Test: blockdev nvme admin passthru ...[2024-12-06 13:07:46.112002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:59.268 [2024-12-06 13:07:46.112060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:59.268 passed 00:12:59.268 Test: blockdev copy ...passed 00:12:59.268 00:12:59.268 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.268 suites 7 7 n/a 0 0 00:12:59.268 tests 161 161 161 0 0 00:12:59.268 asserts 1025 1025 1025 0 n/a 00:12:59.268 00:12:59.268 Elapsed time = 1.556 seconds 00:12:59.268 0 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62797 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62797 ']' 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62797 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62797 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.268 killing process with pid 62797 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62797' 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62797 00:12:59.268 13:07:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62797 00:13:00.198 13:07:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:00.198 00:13:00.198 real 0m2.827s 00:13:00.198 user 0m7.264s 00:13:00.198 sys 0m0.432s 00:13:00.198 13:07:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.198 13:07:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:00.198 ************************************ 00:13:00.198 END TEST bdev_bounds 00:13:00.198 ************************************ 00:13:00.198 13:07:47 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:00.198 13:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:00.198 13:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.198 13:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:00.455 ************************************ 00:13:00.455 START TEST bdev_nbd 00:13:00.455 ************************************ 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:00.455 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62862 00:13:00.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62862 /var/tmp/spdk-nbd.sock 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62862 ']' 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.456 13:07:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:00.456 [2024-12-06 13:07:47.338231] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:13:00.456 [2024-12-06 13:07:47.338642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.712 [2024-12-06 13:07:47.527275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.713 [2024-12-06 13:07:47.661155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:01.642 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.898 1+0 records in 00:13:01.898 1+0 records out 00:13:01.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040568 s, 10.1 MB/s 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.898 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.899 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.899 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.899 13:07:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.899 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.899 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:01.899 13:07:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.156 1+0 records in 00:13:02.156 1+0 records out 00:13:02.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588506 s, 7.0 MB/s 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:02.156 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.412 1+0 records in 00:13:02.412 1+0 records out 00:13:02.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511263 s, 8.0 MB/s 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:02.412 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.977 1+0 records in 00:13:02.977 1+0 records out 00:13:02.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521129 s, 7.9 MB/s 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:02.977 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:02.978 13:07:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:02.978 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:02.978 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.978 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.978 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.978 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:03.235 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:03.235 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.235 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.235 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.235 1+0 records in 00:13:03.235 1+0 records out 00:13:03.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736349 s, 5.6 MB/s 00:13:03.235 13:07:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.235 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:03.235 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.235 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.235 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:03.235 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.235 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:03.235 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.509 1+0 records in 00:13:03.509 1+0 records out 00:13:03.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00186275 s, 2.2 MB/s 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:03.509 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.766 1+0 records in 00:13:03.766 1+0 records out 00:13:03.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062478 s, 6.6 MB/s 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:03.766 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.023 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd0", 00:13:04.023 "bdev_name": "Nvme0n1" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd1", 00:13:04.023 "bdev_name": "Nvme1n1p1" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd2", 00:13:04.023 "bdev_name": "Nvme1n1p2" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd3", 00:13:04.023 "bdev_name": "Nvme2n1" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd4", 00:13:04.023 "bdev_name": "Nvme2n2" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd5", 00:13:04.023 "bdev_name": "Nvme2n3" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd6", 00:13:04.023 "bdev_name": "Nvme3n1" 00:13:04.023 } 00:13:04.023 ]' 00:13:04.023 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:04.023 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd0", 00:13:04.023 "bdev_name": "Nvme0n1" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd1", 00:13:04.023 "bdev_name": "Nvme1n1p1" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd2", 00:13:04.023 "bdev_name": "Nvme1n1p2" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd3", 00:13:04.023 "bdev_name": "Nvme2n1" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd4", 00:13:04.023 "bdev_name": "Nvme2n2" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd5", 00:13:04.023 "bdev_name": "Nvme2n3" 00:13:04.023 }, 00:13:04.023 { 00:13:04.023 "nbd_device": "/dev/nbd6", 00:13:04.023 "bdev_name": "Nvme3n1" 00:13:04.023 } 00:13:04.023 ]' 00:13:04.023 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:04.024 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:13:04.024 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.024 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:13:04.024 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.024 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:04.024 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.024 13:07:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.281 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.845 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.102 13:07:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.359 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.616 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.874 13:07:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
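Each nbd_stop_disk above is followed by waitfornbd_exit, which polls /proc/partitions until the kernel drops the device node. A condensed sketch of that loop, reconstructed from the xtrace output: the retry limit of 20 is visible in the trace, while the sleep between attempts is an assumption (it does not appear in the log).

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the device no longer appears in the partition table
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1   # assumed back-off; the interval is not shown in the trace
        done
        return 1
    }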
00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.133 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.391 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:06.391 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:06.391 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.650 13:07:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:06.650 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:06.909 /dev/nbd0 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.909 1+0 records in 00:13:06.909 1+0 records out 00:13:06.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453577 s, 9.0 MB/s 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:06.909 13:07:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:13:07.167 /dev/nbd1 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:07.167 13:07:54 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.167 1+0 records in 00:13:07.167 1+0 records out 00:13:07.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741441 s, 5.5 MB/s 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:07.167 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:13:07.427 /dev/nbd10 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.427 1+0 records in 00:13:07.427 1+0 records out 00:13:07.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804011 s, 5.1 MB/s 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:07.427 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:13:07.685 /dev/nbd11 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.943 1+0 records in 00:13:07.943 1+0 records out 00:13:07.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559281 s, 7.3 MB/s 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:07.943 13:07:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:13:08.201 /dev/nbd12 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
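Every nbd_start_disk in this run is verified the same way: wait for the device to show up in /proc/partitions, then prove it is actually readable by copying a single 4 KiB block with O_DIRECT and checking the byte count, which is what the "1+0 records in / 1+0 records out" lines record. A condensed sketch of that readiness check, reconstructed from the traces:

    nbd=/dev/nbd12
    out=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # the device must be registered with the kernel before any I/O is attempted
    grep -q -w "${nbd#/dev/}" /proc/partitions || exit 1
    # one direct-I/O read; a short or failed read means the device is not usable yet
    dd if="$nbd" of="$out" bs=4096 count=1 iflag=direct
    [[ "$(stat -c %s "$out")" -eq 4096 ]] && rm -f "$out"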
00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.201 1+0 records in 00:13:08.201 1+0 records out 00:13:08.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585378 s, 7.0 MB/s 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:08.201 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:13:08.458 /dev/nbd13 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.458 1+0 records in 00:13:08.458 1+0 records out 00:13:08.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568733 s, 7.2 MB/s 00:13:08.458 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.713 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:08.713 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.713 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:08.713 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:08.713 13:07:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.713 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:08.713 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:13:08.970 /dev/nbd14 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.970 1+0 records in 00:13:08.970 1+0 records out 00:13:08.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075584 s, 5.4 MB/s 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.970 13:07:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.228 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd0", 00:13:09.228 "bdev_name": "Nvme0n1" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd1", 00:13:09.228 "bdev_name": "Nvme1n1p1" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd10", 00:13:09.228 "bdev_name": "Nvme1n1p2" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd11", 00:13:09.228 "bdev_name": "Nvme2n1" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd12", 00:13:09.228 "bdev_name": "Nvme2n2" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd13", 00:13:09.228 "bdev_name": "Nvme2n3" 
00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd14", 00:13:09.228 "bdev_name": "Nvme3n1" 00:13:09.228 } 00:13:09.228 ]' 00:13:09.228 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.228 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd0", 00:13:09.228 "bdev_name": "Nvme0n1" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd1", 00:13:09.228 "bdev_name": "Nvme1n1p1" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd10", 00:13:09.228 "bdev_name": "Nvme1n1p2" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd11", 00:13:09.228 "bdev_name": "Nvme2n1" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd12", 00:13:09.228 "bdev_name": "Nvme2n2" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd13", 00:13:09.228 "bdev_name": "Nvme2n3" 00:13:09.228 }, 00:13:09.228 { 00:13:09.228 "nbd_device": "/dev/nbd14", 00:13:09.228 "bdev_name": "Nvme3n1" 00:13:09.228 } 00:13:09.228 ]' 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:09.486 /dev/nbd1 00:13:09.486 /dev/nbd10 00:13:09.486 /dev/nbd11 00:13:09.486 /dev/nbd12 00:13:09.486 /dev/nbd13 00:13:09.486 /dev/nbd14' 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:09.486 /dev/nbd1 00:13:09.486 /dev/nbd10 00:13:09.486 /dev/nbd11 00:13:09.486 /dev/nbd12 00:13:09.486 /dev/nbd13 00:13:09.486 /dev/nbd14' 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:09.486 256+0 records in 00:13:09.486 256+0 records out 00:13:09.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00740968 s, 142 MB/s 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:09.486 256+0 records in 00:13:09.486 256+0 records out 00:13:09.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.15769 s, 6.6 MB/s 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.486 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:09.743 256+0 records in 00:13:09.743 256+0 records out 00:13:09.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157553 s, 6.7 MB/s 00:13:09.743 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.743 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:09.743 256+0 records in 00:13:09.743 256+0 records out 00:13:09.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153926 s, 6.8 MB/s 00:13:10.001 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.001 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:10.001 256+0 records in 00:13:10.001 256+0 records out 00:13:10.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163713 s, 6.4 MB/s 00:13:10.001 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.001 13:07:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:10.258 256+0 records in 00:13:10.258 256+0 records out 00:13:10.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166596 s, 6.3 MB/s 00:13:10.258 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.258 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:10.258 256+0 records in 00:13:10.258 256+0 records out 00:13:10.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143932 s, 7.3 MB/s 00:13:10.258 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.258 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:10.516 256+0 records in 00:13:10.516 256+0 records out 00:13:10.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167801 s, 6.2 MB/s 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.516 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.845 13:07:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.129 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.388 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.648 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:11.906 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:13:11.906 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:11.906 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:11.906 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.906 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.906 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:12.169 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.169 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.169 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.169 13:07:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.428 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.686 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:12.943 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:12.944 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:12.944 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.944 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:12.944 13:07:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:13.201 malloc_lvol_verify 00:13:13.201 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:13.459 e98e01fd-6ad3-4f82-86b7-884014841eac 00:13:13.459 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:13.716 2ae38dc0-e4e9-4555-abf4-e42c53c7c906 00:13:13.717 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:13.974 /dev/nbd0 00:13:14.257 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:14.257 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:14.257 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:14.257 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:14.257 13:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:14.257 mke2fs 1.47.0 (5-Feb-2023) 00:13:14.257 Discarding device blocks: 0/4096 done 00:13:14.257 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:14.257 00:13:14.257 Allocating group tables: 0/1 done 00:13:14.257 Writing inode tables: 0/1 done 00:13:14.257 Creating journal (1024 blocks): done 00:13:14.257 Writing superblocks and filesystem accounting information: 0/1 done 00:13:14.257 00:13:14.257 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:14.257 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.257 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:14.257 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.257 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:14.257 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:14.257 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62862 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62862 ']' 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62862 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62862 00:13:14.515 killing process with pid 62862 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62862' 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62862 00:13:14.515 13:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62862 00:13:15.446 13:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:15.446 00:13:15.446 real 0m15.224s 00:13:15.446 user 0m21.993s 00:13:15.446 sys 0m4.784s 00:13:15.446 13:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.446 ************************************ 00:13:15.446 END TEST bdev_nbd 00:13:15.446 ************************************ 00:13:15.446 13:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:15.702 13:08:02 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:13:15.702 13:08:02 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:13:15.702 13:08:02 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:13:15.702 skipping fio tests on NVMe due to multi-ns failures. 00:13:15.702 13:08:02 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:15.702 13:08:02 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:15.702 13:08:02 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:15.702 13:08:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:15.702 13:08:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.702 13:08:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:15.702 ************************************ 00:13:15.702 START TEST bdev_verify 00:13:15.702 ************************************ 00:13:15.702 13:08:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:15.702 [2024-12-06 13:08:02.610397] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:15.702 [2024-12-06 13:08:02.610600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63319 ] 00:13:15.959 [2024-12-06 13:08:02.798003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.959 [2024-12-06 13:08:02.930736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.959 [2024-12-06 13:08:02.930741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.893 Running I/O for 5 seconds... 
00:13:19.323 18304.00 IOPS, 71.50 MiB/s [2024-12-06T13:08:07.273Z] 17984.00 IOPS, 70.25 MiB/s [2024-12-06T13:08:08.208Z] 18069.33 IOPS, 70.58 MiB/s [2024-12-06T13:08:09.140Z] 18208.00 IOPS, 71.12 MiB/s [2024-12-06T13:08:09.140Z] 18265.60 IOPS, 71.35 MiB/s
00:13:22.124 Latency(us)
00:13:22.124 [2024-12-06T13:08:09.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:22.125 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x0 length 0xbd0bd
00:13:22.125 Nvme0n1 : 5.09 1332.75 5.21 0.00 0.00 95817.89 17277.67 93895.21
00:13:22.125 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:22.125 Nvme0n1 : 5.08 1234.79 4.82 0.00 0.00 103371.57 22163.08 104857.60
00:13:22.125 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x0 length 0x4ff80
00:13:22.125 Nvme1n1p1 : 5.09 1332.31 5.20 0.00 0.00 95636.22 17039.36 88652.33
00:13:22.125 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x4ff80 length 0x4ff80
00:13:22.125 Nvme1n1p1 : 5.08 1234.35 4.82 0.00 0.00 103186.01 21805.61 95801.72
00:13:22.125 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x0 length 0x4ff7f
00:13:22.125 Nvme1n1p2 : 5.09 1331.87 5.20 0.00 0.00 95441.70 16920.20 84839.33
00:13:22.125 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:13:22.125 Nvme1n1p2 : 5.08 1233.95 4.82 0.00 0.00 102978.38 20971.52 96754.97
00:13:22.125 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x0 length 0x80000
00:13:22.125 Nvme2n1 : 5.10 1331.46 5.20 0.00 0.00 95285.80 17158.52 85315.96
00:13:22.125 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x80000 length 0x80000
00:13:22.125 Nvme2n1 : 5.08 1233.55 4.82 0.00 0.00 102733.69 20852.36 103904.35
00:13:22.125 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x0 length 0x80000
00:13:22.125 Nvme2n2 : 5.10 1330.95 5.20 0.00 0.00 95115.62 17754.30 88652.33
00:13:22.125 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x80000 length 0x80000
00:13:22.125 Nvme2n2 : 5.09 1233.14 4.82 0.00 0.00 102545.13 20733.21 105334.23
00:13:22.125 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x0 length 0x80000
00:13:22.125 Nvme2n3 : 5.10 1330.56 5.20 0.00 0.00 94931.07 17158.52 90082.21
00:13:22.125 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x80000 length 0x80000
00:13:22.125 Nvme2n3 : 5.09 1232.71 4.82 0.00 0.00 102344.48 20256.58 106287.48
00:13:22.125 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x0 length 0x20000
00:13:22.125 Nvme3n1 : 5.10 1330.17 5.20 0.00 0.00 94734.07 12928.47 91035.46
00:13:22.125 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:22.125 Verification LBA range: start 0x20000 length 0x20000
00:13:22.125 Nvme3n1 : 5.10 1243.39 4.86 0.00 0.00 101311.93 2442.71 106287.48
00:13:22.125 [2024-12-06T13:08:09.141Z] ===================================================================================================================
00:13:22.125 [2024-12-06T13:08:09.141Z] Total : 17965.95 70.18 0.00 0.00 98817.00 2442.71 106287.48
00:13:23.500
00:13:23.500 real 0m7.649s
00:13:23.500 user 0m14.065s
00:13:23.500 sys 0m0.314s
00:13:23.500 13:08:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:23.500 ************************************
00:13:23.500 END TEST bdev_verify
00:13:23.500 ************************************
00:13:23.500 13:08:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:13:23.500 13:08:10 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:23.500 13:08:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:13:23.500 13:08:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:23.500 13:08:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:23.500 ************************************
00:13:23.500 START TEST bdev_verify_big_io
00:13:23.500 ************************************
00:13:23.500 13:08:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:23.500 [2024-12-06 13:08:10.308532] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
00:13:23.500 [2024-12-06 13:08:10.308698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63423 ]
00:13:23.500 [2024-12-06 13:08:10.485283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:23.760 [2024-12-06 13:08:10.613208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:23.760 [2024-12-06 13:08:10.613222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:24.704 Running I/O for 5 seconds...
00:13:30.341 2126.00 IOPS, 132.88 MiB/s [2024-12-06T13:08:17.614Z] 3508.00 IOPS, 219.25 MiB/s
00:13:30.598 Latency(us)
00:13:30.598 [2024-12-06T13:08:17.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:30.598 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x0 length 0xbd0b
00:13:30.598 Nvme0n1 : 5.78 110.79 6.92 0.00 0.00 1098578.57 22758.87 1197283.14
00:13:30.598 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:30.598 Nvme0n1 : 5.78 116.42 7.28 0.00 0.00 1053436.62 20852.36 1067641.02
00:13:30.598 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x0 length 0x4ff8
00:13:30.598 Nvme1n1p1 : 5.89 112.80 7.05 0.00 0.00 1052947.10 92465.34 1372681.31
00:13:30.598 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x4ff8 length 0x4ff8
00:13:30.598 Nvme1n1p1 : 5.72 115.64 7.23 0.00 0.00 1037372.42 96754.97 907494.87
00:13:30.598 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x0 length 0x4ff7
00:13:30.598 Nvme1n1p2 : 5.89 112.77 7.05 0.00 0.00 1019292.25 93418.59 1395559.33
00:13:30.598 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x4ff7 length 0x4ff7
00:13:30.598 Nvme1n1p2 : 5.78 121.71 7.61 0.00 0.00 972243.99 55765.18 937998.89
00:13:30.598 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x0 length 0x8000
00:13:30.598 Nvme2n1 : 5.94 110.52 6.91 0.00 0.00 1012277.74 87699.08 2013265.92
00:13:30.598 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x8000 length 0x8000
00:13:30.598 Nvme2n1 : 5.79 121.64 7.60 0.00 0.00 946206.80 57909.99 1204909.15
00:13:30.598 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x0 length 0x8000
00:13:30.598 Nvme2n2 : 5.97 120.69 7.54 0.00 0.00 913062.74 13464.67 2043769.95
00:13:30.598 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x8000 length 0x8000
00:13:30.598 Nvme2n2 : 5.85 125.74 7.86 0.00 0.00 892645.81 59339.87 1220161.16
00:13:30.598 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x0 length 0x8000
00:13:30.598 Nvme2n3 : 6.00 125.60 7.85 0.00 0.00 849793.08 13166.78 1776859.69
00:13:30.598 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x8000 length 0x8000
00:13:30.598 Nvme2n3 : 5.94 132.99 8.31 0.00 0.00 822583.14 46470.98 1044763.00
00:13:30.598 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x0 length 0x2000
00:13:30.598 Nvme3n1 : 6.07 166.48 10.41 0.00 0.00 628402.68 233.66 1570957.50
00:13:30.598 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.598 Verification LBA range: start 0x2000 length 0x2000
00:13:30.598 Nvme3n1 : 5.96 145.37 9.09 0.00 0.00 735654.07 3708.74 1235413.18
00:13:30.598 [2024-12-06T13:08:17.614Z] ===================================================================================================================
00:13:30.598 [2024-12-06T13:08:17.614Z] Total : 1739.17 108.70 0.00 0.00 914382.47 233.66 2043769.95
00:13:32.493
00:13:32.493 real 0m9.122s
00:13:32.493 user 0m16.944s
00:13:32.493 sys 0m0.378s
00:13:32.493 13:08:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:32.493 13:08:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.493 ************************************
00:13:32.493 END TEST bdev_verify_big_io
00:13:32.493 ************************************
00:13:32.493 13:08:19 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:32.493 13:08:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:32.493 13:08:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:32.493 13:08:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:32.493 ************************************
00:13:32.493 START TEST bdev_write_zeroes
00:13:32.493 ************************************
00:13:32.493 13:08:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:32.493 [2024-12-06 13:08:19.455823] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
00:13:32.493 [2024-12-06 13:08:19.455966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63542 ]
00:13:32.751 [2024-12-06 13:08:19.635635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:33.008 [2024-12-06 13:08:19.788485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:33.651 Running I/O for 1 seconds...
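Each of bdev_verify, bdev_verify_big_io, and bdev_write_zeroes above is the same bdevperf example binary re-run with different workload flags: -q sets the queue depth, -o the I/O size in bytes, -w the workload type, -t the run time in seconds, and -m the core mask. A sketch of the three invocations as they appear in this run (paths and flags taken from the run_test lines above):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

  # bdev_verify: 4 KiB verify workload, queue depth 128, 5 s, cores 0-1
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

  # bdev_verify_big_io: the same verify workload with 64 KiB I/O units
  "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3

  # bdev_write_zeroes: 1 s of write_zeroes commands on a single core
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1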
00:13:34.584 51968.00 IOPS, 203.00 MiB/s
00:13:34.584 Latency(us)
00:13:34.584 [2024-12-06T13:08:21.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:34.584 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.584 Nvme0n1 : 1.03 7409.25 28.94 0.00 0.00 17232.44 8579.26 31695.59
00:13:34.584 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.584 Nvme1n1p1 : 1.03 7399.72 28.91 0.00 0.00 17220.85 12749.73 27644.28
00:13:34.584 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.584 Nvme1n1p2 : 1.03 7390.06 28.87 0.00 0.00 17181.54 12392.26 26452.71
00:13:34.584 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.584 Nvme2n1 : 1.03 7381.28 28.83 0.00 0.00 17123.65 9115.46 25737.77
00:13:34.584 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.584 Nvme2n2 : 1.03 7372.52 28.80 0.00 0.00 17117.10 9055.88 25618.62
00:13:34.584 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.584 Nvme2n3 : 1.03 7363.75 28.76 0.00 0.00 17107.93 8698.41 26571.87
00:13:34.584 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.584 Nvme3n1 : 1.04 7354.92 28.73 0.00 0.00 17097.78 8340.95 28240.06
00:13:34.584 [2024-12-06T13:08:21.600Z] ===================================================================================================================
00:13:34.584 [2024-12-06T13:08:21.600Z] Total : 51671.50 201.84 0.00 0.00 17154.47 8340.95 31695.59
00:13:35.957
00:13:35.957 real 0m3.331s
00:13:35.957 user 0m2.915s
00:13:35.957 sys 0m0.289s
00:13:35.957 13:08:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:35.957 13:08:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:35.957 ************************************
00:13:35.957 END TEST bdev_write_zeroes
00:13:35.957 ************************************
00:13:35.957 13:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:35.957 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:35.957 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:35.957 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:35.957 ************************************
00:13:35.957 START TEST bdev_json_nonenclosed
00:13:35.957 ************************************
00:13:35.957 13:08:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:35.957 [2024-12-06 13:08:22.852868] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
00:13:35.957 [2024-12-06 13:08:22.853021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63596 ] 00:13:36.214 [2024-12-06 13:08:23.027902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.214 [2024-12-06 13:08:23.159799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.214 [2024-12-06 13:08:23.159915] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:36.214 [2024-12-06 13:08:23.159945] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.214 [2024-12-06 13:08:23.159961] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.472 00:13:36.472 real 0m0.670s 00:13:36.472 user 0m0.427s 00:13:36.472 sys 0m0.137s 00:13:36.472 13:08:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.472 ************************************ 00:13:36.472 13:08:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:36.472 END TEST bdev_json_nonenclosed 00:13:36.472 ************************************ 00:13:36.472 13:08:23 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.472 13:08:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:36.472 13:08:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.472 13:08:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:36.472 ************************************ 00:13:36.472 START TEST bdev_json_nonarray 00:13:36.472 ************************************ 00:13:36.472 13:08:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.729 [2024-12-06 13:08:23.589910] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:36.729 [2024-12-06 13:08:23.590093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63622 ] 00:13:36.987 [2024-12-06 13:08:23.774568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.987 [2024-12-06 13:08:23.908209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.987 [2024-12-06 13:08:23.908396] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:36.987 [2024-12-06 13:08:23.908444] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.987 [2024-12-06 13:08:23.908469] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:37.245 00:13:37.245 real 0m0.701s 00:13:37.245 user 0m0.442s 00:13:37.245 sys 0m0.153s 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:37.245 ************************************ 00:13:37.245 END TEST bdev_json_nonarray 00:13:37.245 ************************************ 00:13:37.245 13:08:24 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:13:37.245 13:08:24 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:13:37.245 13:08:24 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:13:37.245 13:08:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:37.245 13:08:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.245 13:08:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:37.245 ************************************ 00:13:37.245 START TEST bdev_gpt_uuid 00:13:37.245 ************************************ 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63647 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63647 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63647 ']' 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.245 13:08:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:37.504 [2024-12-06 13:08:24.361814] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:13:37.504 [2024-12-06 13:08:24.362004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63647 ]
00:13:37.762 [2024-12-06 13:08:24.548861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:37.762 [2024-12-06 13:08:24.676389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:38.702 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:38.702 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:13:38.702 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:38.703 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.703 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:13:38.960 Some configs were skipped because the RPC state that can call them passed over.
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[
00:13:38.960 {
00:13:38.960 "name": "Nvme1n1p1",
00:13:38.960 "aliases": [
00:13:38.960 "6f89f330-603b-4116-ac73-2ca8eae53030"
00:13:38.960 ],
00:13:38.960 "product_name": "GPT Disk",
00:13:38.960 "block_size": 4096,
00:13:38.960 "num_blocks": 655104,
00:13:38.960 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:13:38.960 "assigned_rate_limits": {
00:13:38.960 "rw_ios_per_sec": 0,
00:13:38.960 "rw_mbytes_per_sec": 0,
00:13:38.960 "r_mbytes_per_sec": 0,
00:13:38.960 "w_mbytes_per_sec": 0
00:13:38.960 },
00:13:38.960 "claimed": false,
00:13:38.960 "zoned": false,
00:13:38.960 "supported_io_types": {
00:13:38.960 "read": true,
00:13:38.960 "write": true,
00:13:38.960 "unmap": true,
00:13:38.960 "flush": true,
00:13:38.960 "reset": true,
00:13:38.960 "nvme_admin": false,
00:13:38.960 "nvme_io": false,
00:13:38.960 "nvme_io_md": false,
00:13:38.960 "write_zeroes": true,
00:13:38.960 "zcopy": false,
00:13:38.960 "get_zone_info": false,
00:13:38.960 "zone_management": false,
00:13:38.960 "zone_append": false,
00:13:38.960 "compare": true,
00:13:38.960 "compare_and_write": false,
00:13:38.960 "abort": true,
00:13:38.960 "seek_hole": false,
00:13:38.960 "seek_data": false,
00:13:38.960 "copy": true,
00:13:38.960 "nvme_iov_md": false
00:13:38.960 },
00:13:38.960 "driver_specific": {
00:13:38.960 "gpt": {
00:13:38.960 "base_bdev": "Nvme1n1",
00:13:38.960 "offset_blocks": 256,
00:13:38.960 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:13:38.960 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:13:38.960 "partition_name": "SPDK_TEST_first"
00:13:38.960 }
00:13:38.960 }
00:13:38.960 }
00:13:38.960 ]'
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]]
00:13:38.960 13:08:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]'
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[
00:13:39.218 {
00:13:39.218 "name": "Nvme1n1p2",
00:13:39.218 "aliases": [
00:13:39.218 "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:13:39.218 ],
00:13:39.218 "product_name": "GPT Disk",
00:13:39.218 "block_size": 4096,
00:13:39.218 "num_blocks": 655103,
00:13:39.218 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:13:39.218 "assigned_rate_limits": {
00:13:39.218 "rw_ios_per_sec": 0,
00:13:39.218 "rw_mbytes_per_sec": 0,
00:13:39.218 "r_mbytes_per_sec": 0,
00:13:39.218 "w_mbytes_per_sec": 0
00:13:39.218 },
00:13:39.218 "claimed": false,
00:13:39.218 "zoned": false,
00:13:39.218 "supported_io_types": {
00:13:39.218 "read": true,
00:13:39.218 "write": true,
00:13:39.218 "unmap": true,
00:13:39.218 "flush": true,
00:13:39.218 "reset": true,
00:13:39.218 "nvme_admin": false,
00:13:39.218 "nvme_io": false,
00:13:39.218 "nvme_io_md": false,
00:13:39.218 "write_zeroes": true,
00:13:39.218 "zcopy": false,
00:13:39.218 "get_zone_info": false,
00:13:39.218 "zone_management": false,
00:13:39.218 "zone_append": false,
00:13:39.218 "compare": true,
00:13:39.218 "compare_and_write": false,
00:13:39.218 "abort": true,
00:13:39.218 "seek_hole": false,
00:13:39.218 "seek_data": false,
00:13:39.218 "copy": true,
00:13:39.218 "nvme_iov_md": false
00:13:39.218 },
00:13:39.218 "driver_specific": {
00:13:39.218 "gpt": {
00:13:39.218 "base_bdev": "Nvme1n1",
00:13:39.218 "offset_blocks": 655360,
00:13:39.218 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:13:39.218 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:13:39.218 "partition_name": "SPDK_TEST_second"
00:13:39.218 }
00:13:39.218 }
00:13:39.218 }
00:13:39.218 ]'
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length
00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:39.218 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63647 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63647 ']' 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63647 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63647 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.476 killing process with pid 63647 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63647' 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63647 00:13:39.476 13:08:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63647 00:13:42.074 00:13:42.074 real 0m4.279s 00:13:42.074 user 0m4.577s 00:13:42.074 sys 0m0.578s 00:13:42.074 13:08:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.074 13:08:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:42.074 ************************************ 00:13:42.074 END TEST bdev_gpt_uuid 00:13:42.074 ************************************ 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:13:42.074 13:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:42.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:42.074 Waiting for block devices as requested 00:13:42.331 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.331 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:13:42.331 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.588 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.849 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:47.849 13:08:34 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:13:47.849 13:08:34 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:13:47.849 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:47.849 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:47.849 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:47.849 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:47.849 13:08:34 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:13:47.849 00:13:47.849 real 1m6.531s 00:13:47.849 user 1m25.582s 00:13:47.849 sys 0m10.643s 00:13:47.849 13:08:34 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.849 13:08:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 ************************************ 00:13:47.849 END TEST blockdev_nvme_gpt 00:13:47.849 ************************************ 00:13:47.849 13:08:34 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:47.849 13:08:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.849 13:08:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.849 13:08:34 -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 ************************************ 00:13:47.849 START TEST nvme 00:13:47.849 ************************************ 00:13:47.849 13:08:34 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:47.849 * Looking for test storage... 00:13:47.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:47.849 13:08:34 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:47.849 13:08:34 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:47.849 13:08:34 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:48.107 13:08:34 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.107 13:08:34 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.107 13:08:34 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.107 13:08:34 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.107 13:08:34 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.107 13:08:34 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.107 13:08:34 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.107 13:08:34 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.107 13:08:34 nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:48.107 13:08:34 nvme -- scripts/common.sh@345 -- # : 1 00:13:48.107 13:08:34 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.107 13:08:34 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.107 13:08:34 nvme -- scripts/common.sh@365 -- # decimal 1 00:13:48.107 13:08:34 nvme -- scripts/common.sh@353 -- # local d=1 00:13:48.107 13:08:34 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.107 13:08:34 nvme -- scripts/common.sh@355 -- # echo 1 00:13:48.107 13:08:34 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.107 13:08:34 nvme -- scripts/common.sh@366 -- # decimal 2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@353 -- # local d=2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.107 13:08:34 nvme -- scripts/common.sh@355 -- # echo 2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.107 13:08:34 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.107 13:08:34 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.107 13:08:34 nvme -- scripts/common.sh@368 -- # return 0 00:13:48.107 13:08:34 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.107 13:08:34 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:48.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.107 --rc genhtml_branch_coverage=1 00:13:48.107 --rc genhtml_function_coverage=1 00:13:48.107 --rc genhtml_legend=1 00:13:48.107 --rc geninfo_all_blocks=1 00:13:48.107 --rc geninfo_unexecuted_blocks=1 00:13:48.107 00:13:48.107 ' 00:13:48.107 13:08:34 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:48.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.107 --rc genhtml_branch_coverage=1 00:13:48.107 --rc genhtml_function_coverage=1 00:13:48.107 --rc genhtml_legend=1 00:13:48.107 --rc geninfo_all_blocks=1 00:13:48.107 --rc geninfo_unexecuted_blocks=1 00:13:48.107 00:13:48.107 ' 00:13:48.107 13:08:34 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:48.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.107 --rc genhtml_branch_coverage=1 00:13:48.107 --rc genhtml_function_coverage=1 00:13:48.107 --rc genhtml_legend=1 00:13:48.107 --rc geninfo_all_blocks=1 00:13:48.107 --rc geninfo_unexecuted_blocks=1 00:13:48.107 00:13:48.107 ' 00:13:48.107 13:08:34 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:48.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.107 --rc genhtml_branch_coverage=1 00:13:48.107 --rc genhtml_function_coverage=1 00:13:48.107 --rc genhtml_legend=1 00:13:48.107 --rc geninfo_all_blocks=1 00:13:48.107 --rc geninfo_unexecuted_blocks=1 00:13:48.107 00:13:48.107 ' 00:13:48.107 13:08:34 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:48.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.239 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.239 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.239 13:08:36 nvme -- nvme/nvme.sh@79 -- # uname 00:13:49.239 13:08:36 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:13:49.239 13:08:36 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:13:49.239 13:08:36 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:13:49.239 13:08:36 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1075 -- # stubpid=64299 00:13:49.239 Waiting for stub to ready for secondary processes... 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64299 ]] 00:13:49.239 13:08:36 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:49.239 [2024-12-06 13:08:36.250644] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:49.239 [2024-12-06 13:08:36.250841] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:13:50.613 13:08:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:50.613 13:08:37 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64299 ]] 00:13:50.613 13:08:37 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:50.613 [2024-12-06 13:08:37.560762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.871 [2024-12-06 13:08:37.705686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.871 [2024-12-06 13:08:37.705803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.871 [2024-12-06 13:08:37.705804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.871 [2024-12-06 13:08:37.728100] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:13:50.871 [2024-12-06 13:08:37.728185] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:50.871 [2024-12-06 13:08:37.741103] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:50.871 [2024-12-06 13:08:37.741332] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:50.871 [2024-12-06 13:08:37.746443] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:50.871 [2024-12-06 13:08:37.747236] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:13:50.871 [2024-12-06 13:08:37.747350] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:13:50.871 [2024-12-06 13:08:37.750434] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:50.871 [2024-12-06 13:08:37.750630] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:13:50.871 [2024-12-06 13:08:37.750707] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:13:50.871 [2024-12-06 13:08:37.753058] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:50.871 [2024-12-06 13:08:37.753269] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:13:50.871 [2024-12-06 13:08:37.753339] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:13:50.871 [2024-12-06 13:08:37.753394] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:13:50.871 [2024-12-06 13:08:37.753442] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:13:51.437 done. 00:13:51.437 13:08:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:51.437 13:08:38 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:13:51.437 13:08:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:51.437 13:08:38 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:13:51.437 13:08:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.437 13:08:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.437 ************************************ 00:13:51.437 START TEST nvme_reset 00:13:51.437 ************************************ 00:13:51.437 13:08:38 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:51.695 Initializing NVMe Controllers 00:13:51.695 Skipping QEMU NVMe SSD at 0000:00:10.0 00:13:51.695 Skipping QEMU NVMe SSD at 0000:00:11.0 00:13:51.695 Skipping QEMU NVMe SSD at 0000:00:13.0 00:13:51.695 Skipping QEMU NVMe SSD at 0000:00:12.0 00:13:51.695 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:13:51.695 ************************************ 00:13:51.695 END TEST nvme_reset 00:13:51.695 ************************************ 00:13:51.695 00:13:51.695 real 0m0.309s 00:13:51.695 user 0m0.115s 00:13:51.695 sys 0m0.151s 00:13:51.695 13:08:38 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.695 13:08:38 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:13:51.695 13:08:38 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:13:51.695 13:08:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.695 13:08:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.695 13:08:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.695 ************************************ 00:13:51.695 START TEST nvme_identify 00:13:51.695 ************************************ 00:13:51.695 13:08:38 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:13:51.695 13:08:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:13:51.695 13:08:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:13:51.695 13:08:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:13:51.695 13:08:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:13:51.695 13:08:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:51.695 13:08:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:13:51.695 13:08:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:51.695 13:08:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:51.695 13:08:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:51.695 13:08:38 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:51.695 13:08:38 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:51.695 13:08:38 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:13:51.957 [2024-12-06 13:08:38.959208] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64332 terminated unexpected 00:13:51.957 ===================================================== 00:13:51.957 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:51.957 ===================================================== 00:13:51.957 Controller Capabilities/Features 00:13:51.957 ================================ 00:13:51.957 Vendor ID: 1b36 00:13:51.957 Subsystem Vendor ID: 1af4 00:13:51.957 Serial Number: 12340 00:13:51.957 Model Number: QEMU NVMe Ctrl 00:13:51.957 Firmware Version: 8.0.0 00:13:51.957 Recommended Arb Burst: 6 00:13:51.957 IEEE OUI Identifier: 00 54 52 00:13:51.957 Multi-path I/O 00:13:51.957 May have multiple subsystem ports: No 00:13:51.957 May have multiple controllers: No 00:13:51.957 Associated with SR-IOV VF: No 00:13:51.957 Max Data Transfer Size: 524288 00:13:51.957 Max Number of Namespaces: 256 00:13:51.957 Max Number of I/O Queues: 64 00:13:51.957 NVMe Specification Version (VS): 1.4 00:13:51.957 NVMe Specification Version (Identify): 1.4 00:13:51.957 Maximum Queue Entries: 2048 00:13:51.957 Contiguous Queues Required: Yes 00:13:51.957 Arbitration Mechanisms Supported 00:13:51.957 Weighted Round Robin: Not Supported 00:13:51.957 Vendor Specific: Not Supported 00:13:51.957 Reset Timeout: 7500 ms 00:13:51.957 Doorbell Stride: 4 bytes 00:13:51.957 NVM Subsystem Reset: Not Supported 00:13:51.957 Command Sets Supported 00:13:51.957 NVM Command Set: Supported 00:13:51.957 Boot Partition: Not Supported 00:13:51.957 Memory Page Size Minimum: 4096 bytes 00:13:51.957 Memory Page Size Maximum: 65536 bytes 00:13:51.957 Persistent Memory Region: Not Supported 00:13:51.957 Optional Asynchronous Events Supported 00:13:51.957 Namespace Attribute Notices: Supported 00:13:51.957 Firmware Activation Notices: Not Supported 00:13:51.957 ANA Change Notices: Not Supported 00:13:51.957 PLE Aggregate Log Change Notices: Not Supported 00:13:51.957 LBA Status Info Alert Notices: Not Supported 00:13:51.957 EGE Aggregate Log Change Notices: Not Supported 00:13:51.957 Normal NVM Subsystem Shutdown event: Not Supported 00:13:51.957 Zone Descriptor Change Notices: Not Supported 00:13:51.957 Discovery Log Change Notices: Not Supported 00:13:51.957 Controller Attributes 00:13:51.957 128-bit Host Identifier: Not Supported 00:13:51.957 Non-Operational Permissive Mode: Not Supported 00:13:51.957 NVM Sets: Not Supported 00:13:51.957 Read Recovery Levels: Not Supported 00:13:51.957 Endurance Groups: Not Supported 00:13:51.957 Predictable Latency Mode: Not Supported 00:13:51.957 Traffic Based Keep ALive: Not Supported 00:13:51.957 Namespace Granularity: Not Supported 00:13:51.957 SQ Associations: Not Supported 00:13:51.957 UUID List: Not Supported 00:13:51.957 Multi-Domain Subsystem: Not Supported 00:13:51.957 Fixed Capacity Management: Not Supported 00:13:51.957 Variable Capacity Management: Not Supported 00:13:51.957 Delete Endurance Group: Not Supported 00:13:51.957 Delete NVM Set: Not Supported 00:13:51.957 Extended LBA Formats Supported: Supported 00:13:51.957 Flexible Data Placement Supported: Not Supported 00:13:51.957 00:13:51.957 Controller Memory Buffer Support 00:13:51.957 ================================ 00:13:51.957 Supported: No 
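The controller dump above (and the three that follow) comes from the spdk_nvme_identify example binary; the -i 0 argument selects the same shared-memory ID as the stub started earlier, so identify attaches as a DPDK secondary process instead of resetting the devices. A sketch of running it against a single controller, assuming a stock build tree; the transport-ID string is the usual SPDK 'key:value' form:

# Probe only 0000:00:10.0 instead of every attached NVMe device.
./build/bin/spdk_nvme_identify -i 0 -r 'trtype:PCIe traddr:0000:00:10.0'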
00:13:51.957 00:13:51.957 Persistent Memory Region Support 00:13:51.957 ================================ 00:13:51.957 Supported: No 00:13:51.957 00:13:51.957 Admin Command Set Attributes 00:13:51.957 ============================ 00:13:51.957 Security Send/Receive: Not Supported 00:13:51.957 Format NVM: Supported 00:13:51.957 Firmware Activate/Download: Not Supported 00:13:51.957 Namespace Management: Supported 00:13:51.957 Device Self-Test: Not Supported 00:13:51.957 Directives: Supported 00:13:51.957 NVMe-MI: Not Supported 00:13:51.957 Virtualization Management: Not Supported 00:13:51.957 Doorbell Buffer Config: Supported 00:13:51.957 Get LBA Status Capability: Not Supported 00:13:51.957 Command & Feature Lockdown Capability: Not Supported 00:13:51.957 Abort Command Limit: 4 00:13:51.957 Async Event Request Limit: 4 00:13:51.957 Number of Firmware Slots: N/A 00:13:51.957 Firmware Slot 1 Read-Only: N/A 00:13:51.957 Firmware Activation Without Reset: N/A 00:13:51.957 Multiple Update Detection Support: N/A 00:13:51.957 Firmware Update Granularity: No Information Provided 00:13:51.957 Per-Namespace SMART Log: Yes 00:13:51.957 Asymmetric Namespace Access Log Page: Not Supported 00:13:51.957 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:51.957 Command Effects Log Page: Supported 00:13:51.957 Get Log Page Extended Data: Supported 00:13:51.957 Telemetry Log Pages: Not Supported 00:13:51.957 Persistent Event Log Pages: Not Supported 00:13:51.957 Supported Log Pages Log Page: May Support 00:13:51.957 Commands Supported & Effects Log Page: Not Supported 00:13:51.957 Feature Identifiers & Effects Log Page:May Support 00:13:51.957 NVMe-MI Commands & Effects Log Page: May Support 00:13:51.957 Data Area 4 for Telemetry Log: Not Supported 00:13:51.957 Error Log Page Entries Supported: 1 00:13:51.957 Keep Alive: Not Supported 00:13:51.957 00:13:51.957 NVM Command Set Attributes 00:13:51.957 ========================== 00:13:51.957 Submission Queue Entry Size 00:13:51.957 Max: 64 00:13:51.957 Min: 64 00:13:51.957 Completion Queue Entry Size 00:13:51.957 Max: 16 00:13:51.957 Min: 16 00:13:51.957 Number of Namespaces: 256 00:13:51.957 Compare Command: Supported 00:13:51.957 Write Uncorrectable Command: Not Supported 00:13:51.957 Dataset Management Command: Supported 00:13:51.957 Write Zeroes Command: Supported 00:13:51.957 Set Features Save Field: Supported 00:13:51.957 Reservations: Not Supported 00:13:51.957 Timestamp: Supported 00:13:51.957 Copy: Supported 00:13:51.957 Volatile Write Cache: Present 00:13:51.957 Atomic Write Unit (Normal): 1 00:13:51.957 Atomic Write Unit (PFail): 1 00:13:51.957 Atomic Compare & Write Unit: 1 00:13:51.957 Fused Compare & Write: Not Supported 00:13:51.957 Scatter-Gather List 00:13:51.957 SGL Command Set: Supported 00:13:51.957 SGL Keyed: Not Supported 00:13:51.957 SGL Bit Bucket Descriptor: Not Supported 00:13:51.957 SGL Metadata Pointer: Not Supported 00:13:51.957 Oversized SGL: Not Supported 00:13:51.957 SGL Metadata Address: Not Supported 00:13:51.957 SGL Offset: Not Supported 00:13:51.957 Transport SGL Data Block: Not Supported 00:13:51.957 Replay Protected Memory Block: Not Supported 00:13:51.957 00:13:51.957 Firmware Slot Information 00:13:51.957 ========================= 00:13:51.957 Active slot: 1 00:13:51.957 Slot 1 Firmware Revision: 1.0 00:13:51.957 00:13:51.957 00:13:51.957 Commands Supported and Effects 00:13:51.957 ============================== 00:13:51.957 Admin Commands 00:13:51.957 -------------- 00:13:51.957 Delete I/O Submission Queue (00h): Supported 
00:13:51.957 Create I/O Submission Queue (01h): Supported 00:13:51.957 Get Log Page (02h): Supported 00:13:51.957 Delete I/O Completion Queue (04h): Supported 00:13:51.957 Create I/O Completion Queue (05h): Supported 00:13:51.957 Identify (06h): Supported 00:13:51.957 Abort (08h): Supported 00:13:51.957 Set Features (09h): Supported 00:13:51.957 Get Features (0Ah): Supported 00:13:51.957 Asynchronous Event Request (0Ch): Supported 00:13:51.957 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:51.957 Directive Send (19h): Supported 00:13:51.957 Directive Receive (1Ah): Supported 00:13:51.957 Virtualization Management (1Ch): Supported 00:13:51.957 Doorbell Buffer Config (7Ch): Supported 00:13:51.957 Format NVM (80h): Supported LBA-Change 00:13:51.957 I/O Commands 00:13:51.957 ------------ 00:13:51.957 Flush (00h): Supported LBA-Change 00:13:51.957 Write (01h): Supported LBA-Change 00:13:51.957 Read (02h): Supported 00:13:51.957 Compare (05h): Supported 00:13:51.957 Write Zeroes (08h): Supported LBA-Change 00:13:51.957 Dataset Management (09h): Supported LBA-Change 00:13:51.957 Unknown (0Ch): Supported 00:13:51.957 Unknown (12h): Supported 00:13:51.957 Copy (19h): Supported LBA-Change 00:13:51.957 Unknown (1Dh): Supported LBA-Change 00:13:51.957 00:13:51.957 Error Log 00:13:51.957 ========= 00:13:51.957 00:13:51.957 Arbitration 00:13:51.957 =========== 00:13:51.957 Arbitration Burst: no limit 00:13:51.957 00:13:51.957 Power Management 00:13:51.957 ================ 00:13:51.957 Number of Power States: 1 00:13:51.957 Current Power State: Power State #0 00:13:51.957 Power State #0: 00:13:51.957 Max Power: 25.00 W 00:13:51.957 Non-Operational State: Operational 00:13:51.957 Entry Latency: 16 microseconds 00:13:51.957 Exit Latency: 4 microseconds 00:13:51.957 Relative Read Throughput: 0 00:13:51.957 Relative Read Latency: 0 00:13:51.957 Relative Write Throughput: 0 00:13:51.957 Relative Write Latency: 0 00:13:51.957 Idle Power: Not Reported [2024-12-06 13:08:38.960838] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64332 terminated unexpected 00:13:51.958 Active Power: Not Reported 00:13:51.958 Non-Operational Permissive Mode: Not Supported 00:13:51.958 00:13:51.958 Health Information 00:13:51.958 ================== 00:13:51.958 Critical Warnings: 00:13:51.958 Available Spare Space: OK 00:13:51.958 Temperature: OK 00:13:51.958 Device Reliability: OK 00:13:51.958 Read Only: No 00:13:51.958 Volatile Memory Backup: OK 00:13:51.958 Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.958 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:51.958 Available Spare: 0% 00:13:51.958 Available Spare Threshold: 0% 00:13:51.958 Life Percentage Used: 0% 00:13:51.958 Data Units Read: 661 00:13:51.958 Data Units Written: 589 00:13:51.958 Host Read Commands: 32780 00:13:51.958 Host Write Commands: 32566 00:13:51.958 Controller Busy Time: 0 minutes 00:13:51.958 Power Cycles: 0 00:13:51.958 Power On Hours: 0 hours 00:13:51.958 Unsafe Shutdowns: 0 00:13:51.958 Unrecoverable Media Errors: 0 00:13:51.958 Lifetime Error Log Entries: 0 00:13:51.958 Warning Temperature Time: 0 minutes 00:13:51.958 Critical Temperature Time: 0 minutes 00:13:51.958 00:13:51.958 Number of Queues 00:13:51.958 ================ 00:13:51.958 Number of I/O Submission Queues: 64 00:13:51.958 Number of I/O Completion Queues: 64 00:13:51.958 00:13:51.958 ZNS Specific Controller Data 00:13:51.958 ============================ 00:13:51.958 Zone Append Size Limit: 0 00:13:51.958
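With four controllers dumped back to back, slicing one block out of a saved copy of this output can make review easier; a small awk sketch (identify.log is a hypothetical file holding the dump above):

# Print from the 0000:00:10.0 banner up to, but not including, the
# 0000:00:11.0 banner that starts the next controller block.
awk '/NVMe Controller at 0000:00:10.0/{p=1} /NVMe Controller at 0000:00:11.0/{p=0} p' identify.log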
00:13:51.958 00:13:51.958 Active Namespaces 00:13:51.958 ================= 00:13:51.958 Namespace ID:1 00:13:51.958 Error Recovery Timeout: Unlimited 00:13:51.958 Command Set Identifier: NVM (00h) 00:13:51.958 Deallocate: Supported 00:13:51.958 Deallocated/Unwritten Error: Supported 00:13:51.958 Deallocated Read Value: All 0x00 00:13:51.958 Deallocate in Write Zeroes: Not Supported 00:13:51.958 Deallocated Guard Field: 0xFFFF 00:13:51.958 Flush: Supported 00:13:51.958 Reservation: Not Supported 00:13:51.958 Metadata Transferred as: Separate Metadata Buffer 00:13:51.958 Namespace Sharing Capabilities: Private 00:13:51.958 Size (in LBAs): 1548666 (5GiB) 00:13:51.958 Capacity (in LBAs): 1548666 (5GiB) 00:13:51.958 Utilization (in LBAs): 1548666 (5GiB) 00:13:51.958 Thin Provisioning: Not Supported 00:13:51.958 Per-NS Atomic Units: No 00:13:51.958 Maximum Single Source Range Length: 128 00:13:51.958 Maximum Copy Length: 128 00:13:51.958 Maximum Source Range Count: 128 00:13:51.958 NGUID/EUI64 Never Reused: No 00:13:51.958 Namespace Write Protected: No 00:13:51.958 Number of LBA Formats: 8 00:13:51.958 Current LBA Format: LBA Format #07 00:13:51.958 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:51.958 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:51.958 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:51.958 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:51.958 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:51.958 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:51.958 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:51.958 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:51.958 00:13:51.958 NVM Specific Namespace Data 00:13:51.958 =========================== 00:13:51.958 Logical Block Storage Tag Mask: 0 00:13:51.958 Protection Information Capabilities: 00:13:51.958 16b Guard Protection Information Storage Tag Support: No 00:13:51.958 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:51.958 Storage Tag Check Read Support: No 00:13:51.958 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.958 ===================================================== 00:13:51.958 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:51.958 ===================================================== 00:13:51.958 Controller Capabilities/Features 00:13:51.958 ================================ 00:13:51.958 Vendor ID: 1b36 00:13:51.958 Subsystem Vendor ID: 1af4 00:13:51.958 Serial Number: 12341 00:13:51.958 Model Number: QEMU NVMe Ctrl 00:13:51.958 Firmware Version: 8.0.0 00:13:51.958 Recommended Arb Burst: 6 00:13:51.958 IEEE OUI Identifier: 00 54 52 00:13:51.958 Multi-path I/O 00:13:51.958 May have multiple subsystem ports: No 00:13:51.958 May have multiple controllers: No 
00:13:51.958 Associated with SR-IOV VF: No 00:13:51.958 Max Data Transfer Size: 524288 00:13:51.958 Max Number of Namespaces: 256 00:13:51.958 Max Number of I/O Queues: 64 00:13:51.958 NVMe Specification Version (VS): 1.4 00:13:51.958 NVMe Specification Version (Identify): 1.4 00:13:51.958 Maximum Queue Entries: 2048 00:13:51.958 Contiguous Queues Required: Yes 00:13:51.958 Arbitration Mechanisms Supported 00:13:51.958 Weighted Round Robin: Not Supported 00:13:51.958 Vendor Specific: Not Supported 00:13:51.958 Reset Timeout: 7500 ms 00:13:51.958 Doorbell Stride: 4 bytes 00:13:51.958 NVM Subsystem Reset: Not Supported 00:13:51.958 Command Sets Supported 00:13:51.958 NVM Command Set: Supported 00:13:51.958 Boot Partition: Not Supported 00:13:51.958 Memory Page Size Minimum: 4096 bytes 00:13:51.958 Memory Page Size Maximum: 65536 bytes 00:13:51.958 Persistent Memory Region: Not Supported 00:13:51.958 Optional Asynchronous Events Supported 00:13:51.958 Namespace Attribute Notices: Supported 00:13:51.958 Firmware Activation Notices: Not Supported 00:13:51.958 ANA Change Notices: Not Supported 00:13:51.958 PLE Aggregate Log Change Notices: Not Supported 00:13:51.958 LBA Status Info Alert Notices: Not Supported 00:13:51.958 EGE Aggregate Log Change Notices: Not Supported 00:13:51.958 Normal NVM Subsystem Shutdown event: Not Supported 00:13:51.958 Zone Descriptor Change Notices: Not Supported 00:13:51.958 Discovery Log Change Notices: Not Supported 00:13:51.958 Controller Attributes 00:13:51.958 128-bit Host Identifier: Not Supported 00:13:51.958 Non-Operational Permissive Mode: Not Supported 00:13:51.958 NVM Sets: Not Supported 00:13:51.958 Read Recovery Levels: Not Supported 00:13:51.958 Endurance Groups: Not Supported 00:13:51.958 Predictable Latency Mode: Not Supported 00:13:51.958 Traffic Based Keep ALive: Not Supported 00:13:51.958 Namespace Granularity: Not Supported 00:13:51.958 SQ Associations: Not Supported 00:13:51.958 UUID List: Not Supported 00:13:51.958 Multi-Domain Subsystem: Not Supported 00:13:51.958 Fixed Capacity Management: Not Supported 00:13:51.958 Variable Capacity Management: Not Supported 00:13:51.958 Delete Endurance Group: Not Supported 00:13:51.958 Delete NVM Set: Not Supported 00:13:51.958 Extended LBA Formats Supported: Supported 00:13:51.958 Flexible Data Placement Supported: Not Supported 00:13:51.958 00:13:51.958 Controller Memory Buffer Support 00:13:51.958 ================================ 00:13:51.958 Supported: No 00:13:51.958 00:13:51.958 Persistent Memory Region Support 00:13:51.958 ================================ 00:13:51.958 Supported: No 00:13:51.958 00:13:51.958 Admin Command Set Attributes 00:13:51.958 ============================ 00:13:51.958 Security Send/Receive: Not Supported 00:13:51.958 Format NVM: Supported 00:13:51.958 Firmware Activate/Download: Not Supported 00:13:51.958 Namespace Management: Supported 00:13:51.958 Device Self-Test: Not Supported 00:13:51.958 Directives: Supported 00:13:51.958 NVMe-MI: Not Supported 00:13:51.958 Virtualization Management: Not Supported 00:13:51.958 Doorbell Buffer Config: Supported 00:13:51.958 Get LBA Status Capability: Not Supported 00:13:51.958 Command & Feature Lockdown Capability: Not Supported 00:13:51.958 Abort Command Limit: 4 00:13:51.958 Async Event Request Limit: 4 00:13:51.958 Number of Firmware Slots: N/A 00:13:51.958 Firmware Slot 1 Read-Only: N/A 00:13:51.958 Firmware Activation Without Reset: N/A 00:13:51.958 Multiple Update Detection Support: N/A 00:13:51.958 Firmware Update Granularity: No 
Information Provided 00:13:51.958 Per-Namespace SMART Log: Yes 00:13:51.958 Asymmetric Namespace Access Log Page: Not Supported 00:13:51.958 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:51.958 Command Effects Log Page: Supported 00:13:51.959 Get Log Page Extended Data: Supported 00:13:51.959 Telemetry Log Pages: Not Supported 00:13:51.959 Persistent Event Log Pages: Not Supported 00:13:51.959 Supported Log Pages Log Page: May Support 00:13:51.959 Commands Supported & Effects Log Page: Not Supported 00:13:51.959 Feature Identifiers & Effects Log Page:May Support 00:13:51.959 NVMe-MI Commands & Effects Log Page: May Support 00:13:51.959 Data Area 4 for Telemetry Log: Not Supported 00:13:51.959 Error Log Page Entries Supported: 1 00:13:51.959 Keep Alive: Not Supported 00:13:51.959 00:13:51.959 NVM Command Set Attributes 00:13:51.959 ========================== 00:13:51.959 Submission Queue Entry Size 00:13:51.959 Max: 64 00:13:51.959 Min: 64 00:13:51.959 Completion Queue Entry Size 00:13:51.959 Max: 16 00:13:51.959 Min: 16 00:13:51.959 Number of Namespaces: 256 00:13:51.959 Compare Command: Supported 00:13:51.959 Write Uncorrectable Command: Not Supported 00:13:51.959 Dataset Management Command: Supported 00:13:51.959 Write Zeroes Command: Supported 00:13:51.959 Set Features Save Field: Supported 00:13:51.959 Reservations: Not Supported 00:13:51.959 Timestamp: Supported 00:13:51.959 Copy: Supported 00:13:51.959 Volatile Write Cache: Present 00:13:51.959 Atomic Write Unit (Normal): 1 00:13:51.959 Atomic Write Unit (PFail): 1 00:13:51.959 Atomic Compare & Write Unit: 1 00:13:51.959 Fused Compare & Write: Not Supported 00:13:51.959 Scatter-Gather List 00:13:51.959 SGL Command Set: Supported 00:13:51.959 SGL Keyed: Not Supported 00:13:51.959 SGL Bit Bucket Descriptor: Not Supported 00:13:51.959 SGL Metadata Pointer: Not Supported 00:13:51.959 Oversized SGL: Not Supported 00:13:51.959 SGL Metadata Address: Not Supported 00:13:51.959 SGL Offset: Not Supported 00:13:51.959 Transport SGL Data Block: Not Supported 00:13:51.959 Replay Protected Memory Block: Not Supported 00:13:51.959 00:13:51.959 Firmware Slot Information 00:13:51.959 ========================= 00:13:51.959 Active slot: 1 00:13:51.959 Slot 1 Firmware Revision: 1.0 00:13:51.959 00:13:51.959 00:13:51.959 Commands Supported and Effects 00:13:51.959 ============================== 00:13:51.959 Admin Commands 00:13:51.959 -------------- 00:13:51.959 Delete I/O Submission Queue (00h): Supported 00:13:51.959 Create I/O Submission Queue (01h): Supported 00:13:51.959 Get Log Page (02h): Supported 00:13:51.959 Delete I/O Completion Queue (04h): Supported 00:13:51.959 Create I/O Completion Queue (05h): Supported 00:13:51.959 Identify (06h): Supported 00:13:51.959 Abort (08h): Supported 00:13:51.959 Set Features (09h): Supported 00:13:51.959 Get Features (0Ah): Supported 00:13:51.959 Asynchronous Event Request (0Ch): Supported 00:13:51.959 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:51.959 Directive Send (19h): Supported 00:13:51.959 Directive Receive (1Ah): Supported 00:13:51.959 Virtualization Management (1Ch): Supported 00:13:51.959 Doorbell Buffer Config (7Ch): Supported 00:13:51.959 Format NVM (80h): Supported LBA-Change 00:13:51.959 I/O Commands 00:13:51.959 ------------ 00:13:51.959 Flush (00h): Supported LBA-Change 00:13:51.959 Write (01h): Supported LBA-Change 00:13:51.959 Read (02h): Supported 00:13:51.959 Compare (05h): Supported 00:13:51.959 Write Zeroes (08h): Supported LBA-Change 00:13:51.959 Dataset Management 
(09h): Supported LBA-Change 00:13:51.959 Unknown (0Ch): Supported 00:13:51.959 Unknown (12h): Supported 00:13:51.959 Copy (19h): Supported LBA-Change 00:13:51.959 Unknown (1Dh): Supported LBA-Change 00:13:51.959 00:13:51.959 Error Log 00:13:51.959 ========= 00:13:51.959 00:13:51.959 Arbitration 00:13:51.959 =========== 00:13:51.959 Arbitration Burst: no limit 00:13:51.959 00:13:51.959 Power Management 00:13:51.959 ================ 00:13:51.959 Number of Power States: 1 00:13:51.959 Current Power State: Power State #0 00:13:51.959 Power State #0: 00:13:51.959 Max Power: 25.00 W 00:13:51.959 Non-Operational State: Operational 00:13:51.959 Entry Latency: 16 microseconds 00:13:51.959 Exit Latency: 4 microseconds 00:13:51.959 Relative Read Throughput: 0 00:13:51.959 Relative Read Latency: 0 00:13:51.959 Relative Write Throughput: 0 00:13:51.959 Relative Write Latency: 0 00:13:51.959 Idle Power: Not Reported 00:13:51.959 Active Power: Not Reported 00:13:51.959 Non-Operational Permissive Mode: Not Supported 00:13:51.959 00:13:51.959 Health Information 00:13:51.959 ================== 00:13:51.959 Critical Warnings: 00:13:51.959 Available Spare Space: OK 00:13:51.959 Temperature: OK [2024-12-06 13:08:38.961767] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64332 terminated unexpected 00:13:51.959 Device Reliability: OK 00:13:51.959 Read Only: No 00:13:51.959 Volatile Memory Backup: OK 00:13:51.959 Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.959 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:51.959 Available Spare: 0% 00:13:51.959 Available Spare Threshold: 0% 00:13:51.959 Life Percentage Used: 0% 00:13:51.959 Data Units Read: 1009 00:13:51.959 Data Units Written: 876 00:13:51.959 Host Read Commands: 48251 00:13:51.959 Host Write Commands: 47052 00:13:51.959 Controller Busy Time: 0 minutes 00:13:51.959 Power Cycles: 0 00:13:51.959 Power On Hours: 0 hours 00:13:51.959 Unsafe Shutdowns: 0 00:13:51.959 Unrecoverable Media Errors: 0 00:13:51.959 Lifetime Error Log Entries: 0 00:13:51.959 Warning Temperature Time: 0 minutes 00:13:51.959 Critical Temperature Time: 0 minutes 00:13:51.959 00:13:51.959 Number of Queues 00:13:51.959 ================ 00:13:51.959 Number of I/O Submission Queues: 64 00:13:51.959 Number of I/O Completion Queues: 64 00:13:51.959 00:13:51.959 ZNS Specific Controller Data 00:13:51.959 ============================ 00:13:51.959 Zone Append Size Limit: 0 00:13:51.959 00:13:51.959 00:13:51.959 Active Namespaces 00:13:51.959 ================= 00:13:51.959 Namespace ID:1 00:13:51.959 Error Recovery Timeout: Unlimited 00:13:51.959 Command Set Identifier: NVM (00h) 00:13:51.959 Deallocate: Supported 00:13:51.959 Deallocated/Unwritten Error: Supported 00:13:51.959 Deallocated Read Value: All 0x00 00:13:51.959 Deallocate in Write Zeroes: Not Supported 00:13:51.959 Deallocated Guard Field: 0xFFFF 00:13:51.959 Flush: Supported 00:13:51.959 Reservation: Not Supported 00:13:51.959 Namespace Sharing Capabilities: Private 00:13:51.959 Size (in LBAs): 1310720 (5GiB) 00:13:51.959 Capacity (in LBAs): 1310720 (5GiB) 00:13:51.959 Utilization (in LBAs): 1310720 (5GiB) 00:13:51.959 Thin Provisioning: Not Supported 00:13:51.959 Per-NS Atomic Units: No 00:13:51.959 Maximum Single Source Range Length: 128 00:13:51.959 Maximum Copy Length: 128 00:13:51.959 Maximum Source Range Count: 128 00:13:51.959 NGUID/EUI64 Never Reused: No 00:13:51.959 Namespace Write Protected: No 00:13:51.959 Number of LBA Formats: 8 00:13:51.959 Current LBA Format:
LBA Format #04 00:13:51.959 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:51.959 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:51.959 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:51.959 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:51.959 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:51.959 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:51.959 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:51.959 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:51.959 00:13:51.959 NVM Specific Namespace Data 00:13:51.959 =========================== 00:13:51.959 Logical Block Storage Tag Mask: 0 00:13:51.959 Protection Information Capabilities: 00:13:51.959 16b Guard Protection Information Storage Tag Support: No 00:13:51.959 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:51.959 Storage Tag Check Read Support: No 00:13:51.959 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.959 ===================================================== 00:13:51.959 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:51.959 ===================================================== 00:13:51.959 Controller Capabilities/Features 00:13:51.959 ================================ 00:13:51.959 Vendor ID: 1b36 00:13:51.959 Subsystem Vendor ID: 1af4 00:13:51.959 Serial Number: 12343 00:13:51.959 Model Number: QEMU NVMe Ctrl 00:13:51.959 Firmware Version: 8.0.0 00:13:51.960 Recommended Arb Burst: 6 00:13:51.960 IEEE OUI Identifier: 00 54 52 00:13:51.960 Multi-path I/O 00:13:51.960 May have multiple subsystem ports: No 00:13:51.960 May have multiple controllers: Yes 00:13:51.960 Associated with SR-IOV VF: No 00:13:51.960 Max Data Transfer Size: 524288 00:13:51.960 Max Number of Namespaces: 256 00:13:51.960 Max Number of I/O Queues: 64 00:13:51.960 NVMe Specification Version (VS): 1.4 00:13:51.960 NVMe Specification Version (Identify): 1.4 00:13:51.960 Maximum Queue Entries: 2048 00:13:51.960 Contiguous Queues Required: Yes 00:13:51.960 Arbitration Mechanisms Supported 00:13:51.960 Weighted Round Robin: Not Supported 00:13:51.960 Vendor Specific: Not Supported 00:13:51.960 Reset Timeout: 7500 ms 00:13:51.960 Doorbell Stride: 4 bytes 00:13:51.960 NVM Subsystem Reset: Not Supported 00:13:51.960 Command Sets Supported 00:13:51.960 NVM Command Set: Supported 00:13:51.960 Boot Partition: Not Supported 00:13:51.960 Memory Page Size Minimum: 4096 bytes 00:13:51.960 Memory Page Size Maximum: 65536 bytes 00:13:51.960 Persistent Memory Region: Not Supported 00:13:51.960 Optional Asynchronous Events Supported 00:13:51.960 Namespace Attribute Notices: Supported 00:13:51.960 Firmware Activation Notices: Not Supported 00:13:51.960 ANA Change Notices: Not Supported 00:13:51.960 PLE Aggregate Log 
Change Notices: Not Supported 00:13:51.960 LBA Status Info Alert Notices: Not Supported 00:13:51.960 EGE Aggregate Log Change Notices: Not Supported 00:13:51.960 Normal NVM Subsystem Shutdown event: Not Supported 00:13:51.960 Zone Descriptor Change Notices: Not Supported 00:13:51.960 Discovery Log Change Notices: Not Supported 00:13:51.960 Controller Attributes 00:13:51.960 128-bit Host Identifier: Not Supported 00:13:51.960 Non-Operational Permissive Mode: Not Supported 00:13:51.960 NVM Sets: Not Supported 00:13:51.960 Read Recovery Levels: Not Supported 00:13:51.960 Endurance Groups: Supported 00:13:51.960 Predictable Latency Mode: Not Supported 00:13:51.960 Traffic Based Keep ALive: Not Supported 00:13:51.960 Namespace Granularity: Not Supported 00:13:51.960 SQ Associations: Not Supported 00:13:51.960 UUID List: Not Supported 00:13:51.960 Multi-Domain Subsystem: Not Supported 00:13:51.960 Fixed Capacity Management: Not Supported 00:13:51.960 Variable Capacity Management: Not Supported 00:13:51.960 Delete Endurance Group: Not Supported 00:13:51.960 Delete NVM Set: Not Supported 00:13:51.960 Extended LBA Formats Supported: Supported 00:13:51.960 Flexible Data Placement Supported: Supported 00:13:51.960 00:13:51.960 Controller Memory Buffer Support 00:13:51.960 ================================ 00:13:51.960 Supported: No 00:13:51.960 00:13:51.960 Persistent Memory Region Support 00:13:51.960 ================================ 00:13:51.960 Supported: No 00:13:51.960 00:13:51.960 Admin Command Set Attributes 00:13:51.960 ============================ 00:13:51.960 Security Send/Receive: Not Supported 00:13:51.960 Format NVM: Supported 00:13:51.960 Firmware Activate/Download: Not Supported 00:13:51.960 Namespace Management: Supported 00:13:51.960 Device Self-Test: Not Supported 00:13:51.960 Directives: Supported 00:13:51.960 NVMe-MI: Not Supported 00:13:51.960 Virtualization Management: Not Supported 00:13:51.960 Doorbell Buffer Config: Supported 00:13:51.960 Get LBA Status Capability: Not Supported 00:13:51.960 Command & Feature Lockdown Capability: Not Supported 00:13:51.960 Abort Command Limit: 4 00:13:51.960 Async Event Request Limit: 4 00:13:51.960 Number of Firmware Slots: N/A 00:13:51.960 Firmware Slot 1 Read-Only: N/A 00:13:51.960 Firmware Activation Without Reset: N/A 00:13:51.960 Multiple Update Detection Support: N/A 00:13:51.960 Firmware Update Granularity: No Information Provided 00:13:51.960 Per-Namespace SMART Log: Yes 00:13:51.960 Asymmetric Namespace Access Log Page: Not Supported 00:13:51.960 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:51.960 Command Effects Log Page: Supported 00:13:51.960 Get Log Page Extended Data: Supported 00:13:51.960 Telemetry Log Pages: Not Supported 00:13:51.960 Persistent Event Log Pages: Not Supported 00:13:51.960 Supported Log Pages Log Page: May Support 00:13:51.960 Commands Supported & Effects Log Page: Not Supported 00:13:51.960 Feature Identifiers & Effects Log Page:May Support 00:13:51.960 NVMe-MI Commands & Effects Log Page: May Support 00:13:51.960 Data Area 4 for Telemetry Log: Not Supported 00:13:51.960 Error Log Page Entries Supported: 1 00:13:51.960 Keep Alive: Not Supported 00:13:51.960 00:13:51.960 NVM Command Set Attributes 00:13:51.960 ========================== 00:13:51.960 Submission Queue Entry Size 00:13:51.960 Max: 64 00:13:51.960 Min: 64 00:13:51.960 Completion Queue Entry Size 00:13:51.960 Max: 16 00:13:51.960 Min: 16 00:13:51.960 Number of Namespaces: 256 00:13:51.960 Compare Command: Supported 00:13:51.960 Write 
Uncorrectable Command: Not Supported 00:13:51.960 Dataset Management Command: Supported 00:13:51.960 Write Zeroes Command: Supported 00:13:51.960 Set Features Save Field: Supported 00:13:51.960 Reservations: Not Supported 00:13:51.960 Timestamp: Supported 00:13:51.960 Copy: Supported 00:13:51.960 Volatile Write Cache: Present 00:13:51.960 Atomic Write Unit (Normal): 1 00:13:51.960 Atomic Write Unit (PFail): 1 00:13:51.960 Atomic Compare & Write Unit: 1 00:13:51.960 Fused Compare & Write: Not Supported 00:13:51.960 Scatter-Gather List 00:13:51.960 SGL Command Set: Supported 00:13:51.960 SGL Keyed: Not Supported 00:13:51.960 SGL Bit Bucket Descriptor: Not Supported 00:13:51.960 SGL Metadata Pointer: Not Supported 00:13:51.960 Oversized SGL: Not Supported 00:13:51.960 SGL Metadata Address: Not Supported 00:13:51.960 SGL Offset: Not Supported 00:13:51.960 Transport SGL Data Block: Not Supported 00:13:51.960 Replay Protected Memory Block: Not Supported 00:13:51.960 00:13:51.960 Firmware Slot Information 00:13:51.960 ========================= 00:13:51.960 Active slot: 1 00:13:51.960 Slot 1 Firmware Revision: 1.0 00:13:51.960 00:13:51.960 00:13:51.960 Commands Supported and Effects 00:13:51.960 ============================== 00:13:51.960 Admin Commands 00:13:51.960 -------------- 00:13:51.960 Delete I/O Submission Queue (00h): Supported 00:13:51.960 Create I/O Submission Queue (01h): Supported 00:13:51.960 Get Log Page (02h): Supported 00:13:51.960 Delete I/O Completion Queue (04h): Supported 00:13:51.960 Create I/O Completion Queue (05h): Supported 00:13:51.960 Identify (06h): Supported 00:13:51.960 Abort (08h): Supported 00:13:51.960 Set Features (09h): Supported 00:13:51.960 Get Features (0Ah): Supported 00:13:51.960 Asynchronous Event Request (0Ch): Supported 00:13:51.960 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:51.960 Directive Send (19h): Supported 00:13:51.960 Directive Receive (1Ah): Supported 00:13:51.960 Virtualization Management (1Ch): Supported 00:13:51.960 Doorbell Buffer Config (7Ch): Supported 00:13:51.960 Format NVM (80h): Supported LBA-Change 00:13:51.960 I/O Commands 00:13:51.960 ------------ 00:13:51.960 Flush (00h): Supported LBA-Change 00:13:51.960 Write (01h): Supported LBA-Change 00:13:51.960 Read (02h): Supported 00:13:51.960 Compare (05h): Supported 00:13:51.960 Write Zeroes (08h): Supported LBA-Change 00:13:51.960 Dataset Management (09h): Supported LBA-Change 00:13:51.960 Unknown (0Ch): Supported 00:13:51.960 Unknown (12h): Supported 00:13:51.960 Copy (19h): Supported LBA-Change 00:13:51.960 Unknown (1Dh): Supported LBA-Change 00:13:51.960 00:13:51.960 Error Log 00:13:51.960 ========= 00:13:51.960 00:13:51.960 Arbitration 00:13:51.960 =========== 00:13:51.960 Arbitration Burst: no limit 00:13:51.960 00:13:51.960 Power Management 00:13:51.960 ================ 00:13:51.960 Number of Power States: 1 00:13:51.960 Current Power State: Power State #0 00:13:51.960 Power State #0: 00:13:51.960 Max Power: 25.00 W 00:13:51.960 Non-Operational State: Operational 00:13:51.960 Entry Latency: 16 microseconds 00:13:51.960 Exit Latency: 4 microseconds 00:13:51.960 Relative Read Throughput: 0 00:13:51.960 Relative Read Latency: 0 00:13:51.960 Relative Write Throughput: 0 00:13:51.960 Relative Write Latency: 0 00:13:51.960 Idle Power: Not Reported 00:13:51.960 Active Power: Not Reported 00:13:51.960 Non-Operational Permissive Mode: Not Supported 00:13:51.960 00:13:51.960 Health Information 00:13:51.960 ================== 00:13:51.960 Critical Warnings: 00:13:51.960 
Available Spare Space: OK 00:13:51.960 Temperature: OK 00:13:51.960 Device Reliability: OK 00:13:51.960 Read Only: No 00:13:51.960 Volatile Memory Backup: OK 00:13:51.960 Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.960 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:51.960 Available Spare: 0% 00:13:51.960 Available Spare Threshold: 0% 00:13:51.961 Life Percentage Used: 0% 00:13:51.961 Data Units Read: 777 00:13:51.961 Data Units Written: 706 00:13:51.961 Host Read Commands: 33984 00:13:51.961 Host Write Commands: 33407 00:13:51.961 Controller Busy Time: 0 minutes 00:13:51.961 Power Cycles: 0 00:13:51.961 Power On Hours: 0 hours 00:13:51.961 Unsafe Shutdowns: 0 00:13:51.961 Unrecoverable Media Errors: 0 00:13:51.961 Lifetime Error Log Entries: 0 00:13:51.961 Warning Temperature Time: 0 minutes 00:13:51.961 Critical Temperature Time: 0 minutes 00:13:51.961 00:13:51.961 Number of Queues 00:13:51.961 ================ 00:13:51.961 Number of I/O Submission Queues: 64 00:13:51.961 Number of I/O Completion Queues: 64 00:13:51.961 00:13:51.961 ZNS Specific Controller Data 00:13:51.961 ============================ 00:13:51.961 Zone Append Size Limit: 0 00:13:51.961 00:13:51.961 00:13:51.961 Active Namespaces 00:13:51.961 ================= 00:13:51.961 Namespace ID:1 00:13:51.961 Error Recovery Timeout: Unlimited 00:13:51.961 Command Set Identifier: NVM (00h) 00:13:51.961 Deallocate: Supported 00:13:51.961 Deallocated/Unwritten Error: Supported 00:13:51.961 Deallocated Read Value: All 0x00 00:13:51.961 Deallocate in Write Zeroes: Not Supported 00:13:51.961 Deallocated Guard Field: 0xFFFF 00:13:51.961 Flush: Supported 00:13:51.961 Reservation: Not Supported 00:13:51.961 Namespace Sharing Capabilities: Multiple Controllers 00:13:51.961 Size (in LBAs): 262144 (1GiB) 00:13:51.961 Capacity (in LBAs): 262144 (1GiB) 00:13:51.961 Utilization (in LBAs): 262144 (1GiB) 00:13:51.961 Thin Provisioning: Not Supported 00:13:51.961 Per-NS Atomic Units: No 00:13:51.961 Maximum Single Source Range Length: 128 00:13:51.961 Maximum Copy Length: 128 00:13:51.961 Maximum Source Range Count: 128 00:13:51.961 NGUID/EUI64 Never Reused: No 00:13:51.961 Namespace Write Protected: No 00:13:51.961 Endurance group ID: 1 00:13:51.961 Number of LBA Formats: 8 00:13:51.961 Current LBA Format: LBA Format #04 00:13:51.961 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:51.961 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:51.961 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:51.961 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:51.961 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:51.961 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:51.961 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:51.961 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:51.961 00:13:51.961 Get Feature FDP: 00:13:51.961 ================ 00:13:51.961 Enabled: Yes 00:13:51.961 FDP configuration index: 0 00:13:51.961 00:13:51.961 FDP configurations log page 00:13:51.961 =========================== 00:13:51.961 Number of FDP configurations: 1 00:13:51.961 Version: 0 00:13:51.961 Size: 112 00:13:51.961 FDP Configuration Descriptor: 0 00:13:51.961 Descriptor Size: 96 00:13:51.961 Reclaim Group Identifier format: 2 00:13:51.961 FDP Volatile Write Cache: Not Present 00:13:51.961 FDP Configuration: Valid 00:13:51.961 Vendor Specific Size: 0 00:13:51.961 Number of Reclaim Groups: 2 00:13:51.961 Number of Reclaim Unit Handles: 8 00:13:51.961 Max Placement Identifiers: 128 00:13:51.961
Number of Namespaces Supported: 256 00:13:51.961 Reclaim unit Nominal Size: 6000000 bytes 00:13:51.961 Estimated Reclaim Unit Time Limit: Not Reported 00:13:51.961 RUH Desc #000: RUH Type: Initially Isolated 00:13:51.961 RUH Desc #001: RUH Type: Initially Isolated 00:13:51.961 RUH Desc #002: RUH Type: Initially Isolated 00:13:51.961 RUH Desc #003: RUH Type: Initially Isolated 00:13:51.961 RUH Desc #004: RUH Type: Initially Isolated 00:13:51.961 RUH Desc #005: RUH Type: Initially Isolated 00:13:51.961 RUH Desc #006: RUH Type: Initially Isolated 00:13:51.961 RUH Desc #007: RUH Type: Initially Isolated 00:13:51.961 00:13:51.961 FDP reclaim unit handle usage log page 00:13:51.961 ====================================== 00:13:51.961 Number of Reclaim Unit Handles: 8 00:13:51.961 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:51.961 RUH Usage Desc #001: RUH Attributes: Unused 00:13:51.961 RUH Usage Desc #002: RUH Attributes: Unused 00:13:51.961 RUH Usage Desc #003: RUH Attributes: Unused 00:13:51.961 RUH Usage Desc #004: RUH Attributes: Unused 00:13:51.961 RUH Usage Desc #005: RUH Attributes: Unused 00:13:51.961 RUH Usage Desc #006: RUH Attributes: Unused 00:13:51.961 RUH Usage Desc #007: RUH Attributes: Unused 00:13:51.961 00:13:51.961 FDP statistics log page 00:13:51.961 ======================= 00:13:51.961 Host bytes with metadata written: 446210048 00:13:51.961 [2024-12-06 13:08:38.963725] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64332 terminated unexpected 00:13:51.961 Media bytes with metadata written: 446275584 00:13:51.961 Media bytes erased: 0 00:13:51.961 00:13:51.961 FDP events log page 00:13:51.961 =================== 00:13:51.961 Number of FDP events: 0 00:13:51.961 00:13:51.961 NVM Specific Namespace Data 00:13:51.961 =========================== 00:13:51.961 Logical Block Storage Tag Mask: 0 00:13:51.961 Protection Information Capabilities: 00:13:51.961 16b Guard Protection Information Storage Tag Support: No 00:13:51.961 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:51.961 Storage Tag Check Read Support: No 00:13:51.961 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.961 ===================================================== 00:13:51.961 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:51.961 ===================================================== 00:13:51.961 Controller Capabilities/Features 00:13:51.961 ================================ 00:13:51.961 Vendor ID: 1b36 00:13:51.961 Subsystem Vendor ID: 1af4 00:13:51.961 Serial Number: 12342 00:13:51.961 Model Number: QEMU NVMe Ctrl 00:13:51.961 Firmware Version: 8.0.0 00:13:51.961 Recommended Arb Burst: 6 00:13:51.961 IEEE OUI Identifier: 00 54 52 00:13:51.961 Multi-path I/O
00:13:51.961 May have multiple subsystem ports: No 00:13:51.961 May have multiple controllers: No 00:13:51.961 Associated with SR-IOV VF: No 00:13:51.961 Max Data Transfer Size: 524288 00:13:51.961 Max Number of Namespaces: 256 00:13:51.961 Max Number of I/O Queues: 64 00:13:51.961 NVMe Specification Version (VS): 1.4 00:13:51.961 NVMe Specification Version (Identify): 1.4 00:13:51.961 Maximum Queue Entries: 2048 00:13:51.961 Contiguous Queues Required: Yes 00:13:51.961 Arbitration Mechanisms Supported 00:13:51.961 Weighted Round Robin: Not Supported 00:13:51.961 Vendor Specific: Not Supported 00:13:51.961 Reset Timeout: 7500 ms 00:13:51.961 Doorbell Stride: 4 bytes 00:13:51.961 NVM Subsystem Reset: Not Supported 00:13:51.961 Command Sets Supported 00:13:51.961 NVM Command Set: Supported 00:13:51.961 Boot Partition: Not Supported 00:13:51.961 Memory Page Size Minimum: 4096 bytes 00:13:51.961 Memory Page Size Maximum: 65536 bytes 00:13:51.961 Persistent Memory Region: Not Supported 00:13:51.961 Optional Asynchronous Events Supported 00:13:51.961 Namespace Attribute Notices: Supported 00:13:51.962 Firmware Activation Notices: Not Supported 00:13:51.962 ANA Change Notices: Not Supported 00:13:51.962 PLE Aggregate Log Change Notices: Not Supported 00:13:51.962 LBA Status Info Alert Notices: Not Supported 00:13:51.962 EGE Aggregate Log Change Notices: Not Supported 00:13:51.962 Normal NVM Subsystem Shutdown event: Not Supported 00:13:51.962 Zone Descriptor Change Notices: Not Supported 00:13:51.962 Discovery Log Change Notices: Not Supported 00:13:51.962 Controller Attributes 00:13:51.962 128-bit Host Identifier: Not Supported 00:13:51.962 Non-Operational Permissive Mode: Not Supported 00:13:51.962 NVM Sets: Not Supported 00:13:51.962 Read Recovery Levels: Not Supported 00:13:51.962 Endurance Groups: Not Supported 00:13:51.962 Predictable Latency Mode: Not Supported 00:13:51.962 Traffic Based Keep ALive: Not Supported 00:13:51.962 Namespace Granularity: Not Supported 00:13:51.962 SQ Associations: Not Supported 00:13:51.962 UUID List: Not Supported 00:13:51.962 Multi-Domain Subsystem: Not Supported 00:13:51.962 Fixed Capacity Management: Not Supported 00:13:51.962 Variable Capacity Management: Not Supported 00:13:51.962 Delete Endurance Group: Not Supported 00:13:51.962 Delete NVM Set: Not Supported 00:13:51.962 Extended LBA Formats Supported: Supported 00:13:51.962 Flexible Data Placement Supported: Not Supported 00:13:51.962 00:13:51.962 Controller Memory Buffer Support 00:13:51.962 ================================ 00:13:51.962 Supported: No 00:13:51.962 00:13:51.962 Persistent Memory Region Support 00:13:51.962 ================================ 00:13:51.962 Supported: No 00:13:51.962 00:13:51.962 Admin Command Set Attributes 00:13:51.962 ============================ 00:13:51.962 Security Send/Receive: Not Supported 00:13:51.962 Format NVM: Supported 00:13:51.962 Firmware Activate/Download: Not Supported 00:13:51.962 Namespace Management: Supported 00:13:51.962 Device Self-Test: Not Supported 00:13:51.962 Directives: Supported 00:13:51.962 NVMe-MI: Not Supported 00:13:51.962 Virtualization Management: Not Supported 00:13:51.962 Doorbell Buffer Config: Supported 00:13:51.962 Get LBA Status Capability: Not Supported 00:13:51.962 Command & Feature Lockdown Capability: Not Supported 00:13:51.962 Abort Command Limit: 4 00:13:51.962 Async Event Request Limit: 4 00:13:51.962 Number of Firmware Slots: N/A 00:13:51.962 Firmware Slot 1 Read-Only: N/A 00:13:51.962 Firmware Activation Without Reset: N/A 
00:13:51.962 Multiple Update Detection Support: N/A 00:13:51.962 Firmware Update Granularity: No Information Provided 00:13:51.962 Per-Namespace SMART Log: Yes 00:13:51.962 Asymmetric Namespace Access Log Page: Not Supported 00:13:51.962 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:51.962 Command Effects Log Page: Supported 00:13:51.962 Get Log Page Extended Data: Supported 00:13:51.962 Telemetry Log Pages: Not Supported 00:13:51.962 Persistent Event Log Pages: Not Supported 00:13:51.962 Supported Log Pages Log Page: May Support 00:13:51.962 Commands Supported & Effects Log Page: Not Supported 00:13:51.962 Feature Identifiers & Effects Log Page:May Support 00:13:51.962 NVMe-MI Commands & Effects Log Page: May Support 00:13:51.962 Data Area 4 for Telemetry Log: Not Supported 00:13:51.962 Error Log Page Entries Supported: 1 00:13:51.962 Keep Alive: Not Supported 00:13:51.962 00:13:51.962 NVM Command Set Attributes 00:13:51.962 ========================== 00:13:51.962 Submission Queue Entry Size 00:13:51.962 Max: 64 00:13:51.962 Min: 64 00:13:51.962 Completion Queue Entry Size 00:13:51.962 Max: 16 00:13:51.962 Min: 16 00:13:51.962 Number of Namespaces: 256 00:13:51.962 Compare Command: Supported 00:13:51.962 Write Uncorrectable Command: Not Supported 00:13:51.962 Dataset Management Command: Supported 00:13:51.962 Write Zeroes Command: Supported 00:13:51.962 Set Features Save Field: Supported 00:13:51.962 Reservations: Not Supported 00:13:51.962 Timestamp: Supported 00:13:51.962 Copy: Supported 00:13:51.962 Volatile Write Cache: Present 00:13:51.962 Atomic Write Unit (Normal): 1 00:13:51.962 Atomic Write Unit (PFail): 1 00:13:51.962 Atomic Compare & Write Unit: 1 00:13:51.962 Fused Compare & Write: Not Supported 00:13:51.962 Scatter-Gather List 00:13:51.962 SGL Command Set: Supported 00:13:51.962 SGL Keyed: Not Supported 00:13:51.962 SGL Bit Bucket Descriptor: Not Supported 00:13:51.962 SGL Metadata Pointer: Not Supported 00:13:51.962 Oversized SGL: Not Supported 00:13:51.962 SGL Metadata Address: Not Supported 00:13:51.962 SGL Offset: Not Supported 00:13:51.962 Transport SGL Data Block: Not Supported 00:13:51.962 Replay Protected Memory Block: Not Supported 00:13:51.962 00:13:51.962 Firmware Slot Information 00:13:51.962 ========================= 00:13:51.962 Active slot: 1 00:13:51.962 Slot 1 Firmware Revision: 1.0 00:13:51.962 00:13:51.962 00:13:51.962 Commands Supported and Effects 00:13:51.962 ============================== 00:13:51.962 Admin Commands 00:13:51.962 -------------- 00:13:51.962 Delete I/O Submission Queue (00h): Supported 00:13:51.962 Create I/O Submission Queue (01h): Supported 00:13:51.962 Get Log Page (02h): Supported 00:13:51.962 Delete I/O Completion Queue (04h): Supported 00:13:51.962 Create I/O Completion Queue (05h): Supported 00:13:51.962 Identify (06h): Supported 00:13:51.962 Abort (08h): Supported 00:13:51.962 Set Features (09h): Supported 00:13:51.962 Get Features (0Ah): Supported 00:13:51.962 Asynchronous Event Request (0Ch): Supported 00:13:51.962 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:51.962 Directive Send (19h): Supported 00:13:51.962 Directive Receive (1Ah): Supported 00:13:51.962 Virtualization Management (1Ch): Supported 00:13:51.962 Doorbell Buffer Config (7Ch): Supported 00:13:51.962 Format NVM (80h): Supported LBA-Change 00:13:51.962 I/O Commands 00:13:51.962 ------------ 00:13:51.962 Flush (00h): Supported LBA-Change 00:13:51.962 Write (01h): Supported LBA-Change 00:13:51.962 Read (02h): Supported 00:13:51.962 Compare (05h): 
Supported 00:13:51.962 Write Zeroes (08h): Supported LBA-Change 00:13:51.962 Dataset Management (09h): Supported LBA-Change 00:13:51.962 Unknown (0Ch): Supported 00:13:51.962 Unknown (12h): Supported 00:13:51.962 Copy (19h): Supported LBA-Change 00:13:51.962 Unknown (1Dh): Supported LBA-Change 00:13:51.962 00:13:51.962 Error Log 00:13:51.962 ========= 00:13:51.962 00:13:51.962 Arbitration 00:13:51.962 =========== 00:13:51.962 Arbitration Burst: no limit 00:13:51.962 00:13:51.962 Power Management 00:13:51.962 ================ 00:13:51.962 Number of Power States: 1 00:13:51.962 Current Power State: Power State #0 00:13:51.962 Power State #0: 00:13:51.962 Max Power: 25.00 W 00:13:51.962 Non-Operational State: Operational 00:13:51.962 Entry Latency: 16 microseconds 00:13:51.962 Exit Latency: 4 microseconds 00:13:51.962 Relative Read Throughput: 0 00:13:51.962 Relative Read Latency: 0 00:13:51.962 Relative Write Throughput: 0 00:13:51.962 Relative Write Latency: 0 00:13:51.962 Idle Power: Not Reported 00:13:51.962 Active Power: Not Reported 00:13:51.962 Non-Operational Permissive Mode: Not Supported 00:13:51.962 00:13:51.962 Health Information 00:13:51.962 ================== 00:13:51.962 Critical Warnings: 00:13:51.962 Available Spare Space: OK 00:13:51.962 Temperature: OK 00:13:51.962 Device Reliability: OK 00:13:51.962 Read Only: No 00:13:51.962 Volatile Memory Backup: OK 00:13:51.962 Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.962 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:51.962 Available Spare: 0% 00:13:51.962 Available Spare Threshold: 0% 00:13:51.962 Life Percentage Used: 0% 00:13:51.962 Data Units Read: 2084 00:13:51.962 Data Units Written: 1871 00:13:51.962 Host Read Commands: 99562 00:13:51.962 Host Write Commands: 97831 00:13:51.962 Controller Busy Time: 0 minutes 00:13:51.962 Power Cycles: 0 00:13:51.962 Power On Hours: 0 hours 00:13:51.962 Unsafe Shutdowns: 0 00:13:51.962 Unrecoverable Media Errors: 0 00:13:51.962 Lifetime Error Log Entries: 0 00:13:51.962 Warning Temperature Time: 0 minutes 00:13:51.962 Critical Temperature Time: 0 minutes 00:13:51.962 00:13:51.962 Number of Queues 00:13:51.962 ================ 00:13:51.962 Number of I/O Submission Queues: 64 00:13:51.962 Number of I/O Completion Queues: 64 00:13:51.962 00:13:51.962 ZNS Specific Controller Data 00:13:51.962 ============================ 00:13:51.962 Zone Append Size Limit: 0 00:13:51.962 00:13:51.962 00:13:51.962 Active Namespaces 00:13:51.962 ================= 00:13:51.962 Namespace ID:1 00:13:51.962 Error Recovery Timeout: Unlimited 00:13:51.962 Command Set Identifier: NVM (00h) 00:13:51.962 Deallocate: Supported 00:13:51.962 Deallocated/Unwritten Error: Supported 00:13:51.963 Deallocated Read Value: All 0x00 00:13:51.963 Deallocate in Write Zeroes: Not Supported 00:13:51.963 Deallocated Guard Field: 0xFFFF 00:13:51.963 Flush: Supported 00:13:51.963 Reservation: Not Supported 00:13:51.963 Namespace Sharing Capabilities: Private 00:13:51.963 Size (in LBAs): 1048576 (4GiB) 00:13:51.963 Capacity (in LBAs): 1048576 (4GiB) 00:13:51.963 Utilization (in LBAs): 1048576 (4GiB) 00:13:51.963 Thin Provisioning: Not Supported 00:13:51.963 Per-NS Atomic Units: No 00:13:51.963 Maximum Single Source Range Length: 128 00:13:51.963 Maximum Copy Length: 128 00:13:51.963 Maximum Source Range Count: 128 00:13:51.963 NGUID/EUI64 Never Reused: No 00:13:51.963 Namespace Write Protected: No 00:13:51.963 Number of LBA Formats: 8 00:13:51.963 Current LBA Format: LBA Format #04 00:13:51.963 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:13:51.963 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:51.963 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:51.963 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:51.963 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:51.963 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:51.963 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:51.963 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:51.963 00:13:51.963 NVM Specific Namespace Data 00:13:51.963 =========================== 00:13:51.963 Logical Block Storage Tag Mask: 0 00:13:51.963 Protection Information Capabilities: 00:13:51.963 16b Guard Protection Information Storage Tag Support: No 00:13:51.963 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:51.963 Storage Tag Check Read Support: No 00:13:51.963 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Namespace ID:2 00:13:51.963 Error Recovery Timeout: Unlimited 00:13:51.963 Command Set Identifier: NVM (00h) 00:13:51.963 Deallocate: Supported 00:13:51.963 Deallocated/Unwritten Error: Supported 00:13:51.963 Deallocated Read Value: All 0x00 00:13:51.963 Deallocate in Write Zeroes: Not Supported 00:13:51.963 Deallocated Guard Field: 0xFFFF 00:13:51.963 Flush: Supported 00:13:51.963 Reservation: Not Supported 00:13:51.963 Namespace Sharing Capabilities: Private 00:13:51.963 Size (in LBAs): 1048576 (4GiB) 00:13:51.963 Capacity (in LBAs): 1048576 (4GiB) 00:13:51.963 Utilization (in LBAs): 1048576 (4GiB) 00:13:51.963 Thin Provisioning: Not Supported 00:13:51.963 Per-NS Atomic Units: No 00:13:51.963 Maximum Single Source Range Length: 128 00:13:51.963 Maximum Copy Length: 128 00:13:51.963 Maximum Source Range Count: 128 00:13:51.963 NGUID/EUI64 Never Reused: No 00:13:51.963 Namespace Write Protected: No 00:13:51.963 Number of LBA Formats: 8 00:13:51.963 Current LBA Format: LBA Format #04 00:13:51.963 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:51.963 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:51.963 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:51.963 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:51.963 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:51.963 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:51.963 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:51.963 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:51.963 00:13:51.963 NVM Specific Namespace Data 00:13:51.963 =========================== 00:13:51.963 Logical Block Storage Tag Mask: 0 00:13:51.963 Protection Information Capabilities: 00:13:51.963 16b Guard Protection Information Storage Tag Support: No 00:13:51.963 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
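The GiB figures shown in parentheses next to the Size/Capacity/Utilization fields follow directly from the LBA count and the data size of the current LBA format. A minimal sketch of that arithmetic, assuming the 4096-byte data size of LBA Format #04 reported above (the variable names are illustrative, not from the test scripts):

    lbas=1048576      # "Size (in LBAs)" reported for this namespace
    block_size=4096   # "Data Size" of the current LBA format (#04)
    bytes=$((lbas * block_size))
    echo "${bytes} bytes ($((bytes >> 30)) GiB)"   # 4294967296 bytes (4 GiB)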
00:13:51.963 Storage Tag Check Read Support: No 00:13:51.963 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:51.963 Namespace ID:3 00:13:51.963 Error Recovery Timeout: Unlimited 00:13:51.963 Command Set Identifier: NVM (00h) 00:13:51.963 Deallocate: Supported 00:13:51.963 Deallocated/Unwritten Error: Supported 00:13:51.963 Deallocated Read Value: All 0x00 00:13:51.963 Deallocate in Write Zeroes: Not Supported 00:13:51.963 Deallocated Guard Field: 0xFFFF 00:13:51.963 Flush: Supported 00:13:51.963 Reservation: Not Supported 00:13:51.963 Namespace Sharing Capabilities: Private 00:13:51.963 Size (in LBAs): 1048576 (4GiB) 00:13:52.222 Capacity (in LBAs): 1048576 (4GiB) 00:13:52.222 Utilization (in LBAs): 1048576 (4GiB) 00:13:52.222 Thin Provisioning: Not Supported 00:13:52.222 Per-NS Atomic Units: No 00:13:52.222 Maximum Single Source Range Length: 128 00:13:52.222 Maximum Copy Length: 128 00:13:52.222 Maximum Source Range Count: 128 00:13:52.222 NGUID/EUI64 Never Reused: No 00:13:52.222 Namespace Write Protected: No 00:13:52.222 Number of LBA Formats: 8 00:13:52.222 Current LBA Format: LBA Format #04 00:13:52.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:52.222 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:52.222 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:52.222 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:52.222 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:52.222 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:52.222 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:52.222 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:52.222 00:13:52.222 NVM Specific Namespace Data 00:13:52.222 =========================== 00:13:52.222 Logical Block Storage Tag Mask: 0 00:13:52.222 Protection Information Capabilities: 00:13:52.222 16b Guard Protection Information Storage Tag Support: No 00:13:52.222 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:52.222 Storage Tag Check Read Support: No 00:13:52.222 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.222 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.222 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.222 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.223 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.223 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.223 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.223 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.223 13:08:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:52.223 13:08:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:13:52.482 ===================================================== 00:13:52.482 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:52.482 ===================================================== 00:13:52.482 Controller Capabilities/Features 00:13:52.482 ================================ 00:13:52.482 Vendor ID: 1b36 00:13:52.482 Subsystem Vendor ID: 1af4 00:13:52.482 Serial Number: 12340 00:13:52.482 Model Number: QEMU NVMe Ctrl 00:13:52.482 Firmware Version: 8.0.0 00:13:52.482 Recommended Arb Burst: 6 00:13:52.482 IEEE OUI Identifier: 00 54 52 00:13:52.482 Multi-path I/O 00:13:52.482 May have multiple subsystem ports: No 00:13:52.482 May have multiple controllers: No 00:13:52.482 Associated with SR-IOV VF: No 00:13:52.482 Max Data Transfer Size: 524288 00:13:52.482 Max Number of Namespaces: 256 00:13:52.482 Max Number of I/O Queues: 64 00:13:52.482 NVMe Specification Version (VS): 1.4 00:13:52.482 NVMe Specification Version (Identify): 1.4 00:13:52.482 Maximum Queue Entries: 2048 00:13:52.482 Contiguous Queues Required: Yes 00:13:52.482 Arbitration Mechanisms Supported 00:13:52.482 Weighted Round Robin: Not Supported 00:13:52.482 Vendor Specific: Not Supported 00:13:52.482 Reset Timeout: 7500 ms 00:13:52.482 Doorbell Stride: 4 bytes 00:13:52.482 NVM Subsystem Reset: Not Supported 00:13:52.482 Command Sets Supported 00:13:52.482 NVM Command Set: Supported 00:13:52.482 Boot Partition: Not Supported 00:13:52.482 Memory Page Size Minimum: 4096 bytes 00:13:52.482 Memory Page Size Maximum: 65536 bytes 00:13:52.482 Persistent Memory Region: Not Supported 00:13:52.482 Optional Asynchronous Events Supported 00:13:52.482 Namespace Attribute Notices: Supported 00:13:52.482 Firmware Activation Notices: Not Supported 00:13:52.482 ANA Change Notices: Not Supported 00:13:52.482 PLE Aggregate Log Change Notices: Not Supported 00:13:52.482 LBA Status Info Alert Notices: Not Supported 00:13:52.482 EGE Aggregate Log Change Notices: Not Supported 00:13:52.482 Normal NVM Subsystem Shutdown event: Not Supported 00:13:52.482 Zone Descriptor Change Notices: Not Supported 00:13:52.482 Discovery Log Change Notices: Not Supported 00:13:52.482 Controller Attributes 00:13:52.482 128-bit Host Identifier: Not Supported 00:13:52.482 Non-Operational Permissive Mode: Not Supported 00:13:52.482 NVM Sets: Not Supported 00:13:52.482 Read Recovery Levels: Not Supported 00:13:52.482 Endurance Groups: Not Supported 00:13:52.482 Predictable Latency Mode: Not Supported 00:13:52.482 Traffic Based Keep ALive: Not Supported 00:13:52.482 Namespace Granularity: Not Supported 00:13:52.482 SQ Associations: Not Supported 00:13:52.482 UUID List: Not Supported 00:13:52.482 Multi-Domain Subsystem: Not Supported 00:13:52.482 Fixed Capacity Management: Not Supported 00:13:52.482 Variable Capacity Management: Not Supported 00:13:52.482 Delete Endurance Group: Not Supported 00:13:52.482 Delete NVM Set: Not Supported 00:13:52.482 Extended LBA Formats Supported: Supported 00:13:52.482 Flexible Data Placement Supported: Not Supported 00:13:52.482 00:13:52.482 Controller Memory Buffer Support 00:13:52.482 ================================ 00:13:52.482 Supported: No 00:13:52.482 00:13:52.482 Persistent Memory Region Support 00:13:52.482 
================================ 00:13:52.482 Supported: No 00:13:52.482 00:13:52.482 Admin Command Set Attributes 00:13:52.482 ============================ 00:13:52.482 Security Send/Receive: Not Supported 00:13:52.482 Format NVM: Supported 00:13:52.482 Firmware Activate/Download: Not Supported 00:13:52.482 Namespace Management: Supported 00:13:52.482 Device Self-Test: Not Supported 00:13:52.482 Directives: Supported 00:13:52.482 NVMe-MI: Not Supported 00:13:52.482 Virtualization Management: Not Supported 00:13:52.482 Doorbell Buffer Config: Supported 00:13:52.482 Get LBA Status Capability: Not Supported 00:13:52.482 Command & Feature Lockdown Capability: Not Supported 00:13:52.482 Abort Command Limit: 4 00:13:52.482 Async Event Request Limit: 4 00:13:52.482 Number of Firmware Slots: N/A 00:13:52.482 Firmware Slot 1 Read-Only: N/A 00:13:52.482 Firmware Activation Without Reset: N/A 00:13:52.482 Multiple Update Detection Support: N/A 00:13:52.482 Firmware Update Granularity: No Information Provided 00:13:52.482 Per-Namespace SMART Log: Yes 00:13:52.482 Asymmetric Namespace Access Log Page: Not Supported 00:13:52.482 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:52.482 Command Effects Log Page: Supported 00:13:52.482 Get Log Page Extended Data: Supported 00:13:52.482 Telemetry Log Pages: Not Supported 00:13:52.482 Persistent Event Log Pages: Not Supported 00:13:52.482 Supported Log Pages Log Page: May Support 00:13:52.482 Commands Supported & Effects Log Page: Not Supported 00:13:52.482 Feature Identifiers & Effects Log Page:May Support 00:13:52.482 NVMe-MI Commands & Effects Log Page: May Support 00:13:52.482 Data Area 4 for Telemetry Log: Not Supported 00:13:52.482 Error Log Page Entries Supported: 1 00:13:52.482 Keep Alive: Not Supported 00:13:52.482 00:13:52.482 NVM Command Set Attributes 00:13:52.482 ========================== 00:13:52.482 Submission Queue Entry Size 00:13:52.482 Max: 64 00:13:52.482 Min: 64 00:13:52.482 Completion Queue Entry Size 00:13:52.482 Max: 16 00:13:52.482 Min: 16 00:13:52.482 Number of Namespaces: 256 00:13:52.482 Compare Command: Supported 00:13:52.482 Write Uncorrectable Command: Not Supported 00:13:52.482 Dataset Management Command: Supported 00:13:52.482 Write Zeroes Command: Supported 00:13:52.482 Set Features Save Field: Supported 00:13:52.482 Reservations: Not Supported 00:13:52.482 Timestamp: Supported 00:13:52.482 Copy: Supported 00:13:52.482 Volatile Write Cache: Present 00:13:52.482 Atomic Write Unit (Normal): 1 00:13:52.482 Atomic Write Unit (PFail): 1 00:13:52.482 Atomic Compare & Write Unit: 1 00:13:52.482 Fused Compare & Write: Not Supported 00:13:52.482 Scatter-Gather List 00:13:52.482 SGL Command Set: Supported 00:13:52.482 SGL Keyed: Not Supported 00:13:52.482 SGL Bit Bucket Descriptor: Not Supported 00:13:52.482 SGL Metadata Pointer: Not Supported 00:13:52.482 Oversized SGL: Not Supported 00:13:52.482 SGL Metadata Address: Not Supported 00:13:52.482 SGL Offset: Not Supported 00:13:52.482 Transport SGL Data Block: Not Supported 00:13:52.482 Replay Protected Memory Block: Not Supported 00:13:52.482 00:13:52.482 Firmware Slot Information 00:13:52.482 ========================= 00:13:52.482 Active slot: 1 00:13:52.482 Slot 1 Firmware Revision: 1.0 00:13:52.482 00:13:52.482 00:13:52.482 Commands Supported and Effects 00:13:52.482 ============================== 00:13:52.482 Admin Commands 00:13:52.482 -------------- 00:13:52.482 Delete I/O Submission Queue (00h): Supported 00:13:52.482 Create I/O Submission Queue (01h): Supported 00:13:52.482 
Get Log Page (02h): Supported 00:13:52.482 Delete I/O Completion Queue (04h): Supported 00:13:52.482 Create I/O Completion Queue (05h): Supported 00:13:52.483 Identify (06h): Supported 00:13:52.483 Abort (08h): Supported 00:13:52.483 Set Features (09h): Supported 00:13:52.483 Get Features (0Ah): Supported 00:13:52.483 Asynchronous Event Request (0Ch): Supported 00:13:52.483 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:52.483 Directive Send (19h): Supported 00:13:52.483 Directive Receive (1Ah): Supported 00:13:52.483 Virtualization Management (1Ch): Supported 00:13:52.483 Doorbell Buffer Config (7Ch): Supported 00:13:52.483 Format NVM (80h): Supported LBA-Change 00:13:52.483 I/O Commands 00:13:52.483 ------------ 00:13:52.483 Flush (00h): Supported LBA-Change 00:13:52.483 Write (01h): Supported LBA-Change 00:13:52.483 Read (02h): Supported 00:13:52.483 Compare (05h): Supported 00:13:52.483 Write Zeroes (08h): Supported LBA-Change 00:13:52.483 Dataset Management (09h): Supported LBA-Change 00:13:52.483 Unknown (0Ch): Supported 00:13:52.483 Unknown (12h): Supported 00:13:52.483 Copy (19h): Supported LBA-Change 00:13:52.483 Unknown (1Dh): Supported LBA-Change 00:13:52.483 00:13:52.483 Error Log 00:13:52.483 ========= 00:13:52.483 00:13:52.483 Arbitration 00:13:52.483 =========== 00:13:52.483 Arbitration Burst: no limit 00:13:52.483 00:13:52.483 Power Management 00:13:52.483 ================ 00:13:52.483 Number of Power States: 1 00:13:52.483 Current Power State: Power State #0 00:13:52.483 Power State #0: 00:13:52.483 Max Power: 25.00 W 00:13:52.483 Non-Operational State: Operational 00:13:52.483 Entry Latency: 16 microseconds 00:13:52.483 Exit Latency: 4 microseconds 00:13:52.483 Relative Read Throughput: 0 00:13:52.483 Relative Read Latency: 0 00:13:52.483 Relative Write Throughput: 0 00:13:52.483 Relative Write Latency: 0 00:13:52.483 Idle Power: Not Reported 00:13:52.483 Active Power: Not Reported 00:13:52.483 Non-Operational Permissive Mode: Not Supported 00:13:52.483 00:13:52.483 Health Information 00:13:52.483 ================== 00:13:52.483 Critical Warnings: 00:13:52.483 Available Spare Space: OK 00:13:52.483 Temperature: OK 00:13:52.483 Device Reliability: OK 00:13:52.483 Read Only: No 00:13:52.483 Volatile Memory Backup: OK 00:13:52.483 Current Temperature: 323 Kelvin (50 Celsius) 00:13:52.483 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:52.483 Available Spare: 0% 00:13:52.483 Available Spare Threshold: 0% 00:13:52.483 Life Percentage Used: 0% 00:13:52.483 Data Units Read: 661 00:13:52.483 Data Units Written: 589 00:13:52.483 Host Read Commands: 32780 00:13:52.483 Host Write Commands: 32566 00:13:52.483 Controller Busy Time: 0 minutes 00:13:52.483 Power Cycles: 0 00:13:52.483 Power On Hours: 0 hours 00:13:52.483 Unsafe Shutdowns: 0 00:13:52.483 Unrecoverable Media Errors: 0 00:13:52.483 Lifetime Error Log Entries: 0 00:13:52.483 Warning Temperature Time: 0 minutes 00:13:52.483 Critical Temperature Time: 0 minutes 00:13:52.483 00:13:52.483 Number of Queues 00:13:52.483 ================ 00:13:52.483 Number of I/O Submission Queues: 64 00:13:52.483 Number of I/O Completion Queues: 64 00:13:52.483 00:13:52.483 ZNS Specific Controller Data 00:13:52.483 ============================ 00:13:52.483 Zone Append Size Limit: 0 00:13:52.483 00:13:52.483 00:13:52.483 Active Namespaces 00:13:52.483 ================= 00:13:52.483 Namespace ID:1 00:13:52.483 Error Recovery Timeout: Unlimited 00:13:52.483 Command Set Identifier: NVM (00h) 00:13:52.483 Deallocate: Supported 
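These per-controller dumps come from the loop traced at nvme.sh@15-16 in the surrounding lines, which runs the identify example binary once per PCIe address. A minimal reconstruction of that loop, with the contents of the bdfs array assumed from the four addresses appearing in this log:

    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done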
00:13:52.483 Deallocated/Unwritten Error: Supported 00:13:52.483 Deallocated Read Value: All 0x00 00:13:52.483 Deallocate in Write Zeroes: Not Supported 00:13:52.483 Deallocated Guard Field: 0xFFFF 00:13:52.483 Flush: Supported 00:13:52.483 Reservation: Not Supported 00:13:52.483 Metadata Transferred as: Separate Metadata Buffer 00:13:52.483 Namespace Sharing Capabilities: Private 00:13:52.483 Size (in LBAs): 1548666 (5GiB) 00:13:52.483 Capacity (in LBAs): 1548666 (5GiB) 00:13:52.483 Utilization (in LBAs): 1548666 (5GiB) 00:13:52.483 Thin Provisioning: Not Supported 00:13:52.483 Per-NS Atomic Units: No 00:13:52.483 Maximum Single Source Range Length: 128 00:13:52.483 Maximum Copy Length: 128 00:13:52.483 Maximum Source Range Count: 128 00:13:52.483 NGUID/EUI64 Never Reused: No 00:13:52.483 Namespace Write Protected: No 00:13:52.483 Number of LBA Formats: 8 00:13:52.483 Current LBA Format: LBA Format #07 00:13:52.483 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:52.483 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:52.483 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:52.483 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:52.483 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:52.483 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:52.483 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:52.483 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:52.483 00:13:52.483 NVM Specific Namespace Data 00:13:52.483 =========================== 00:13:52.483 Logical Block Storage Tag Mask: 0 00:13:52.483 Protection Information Capabilities: 00:13:52.483 16b Guard Protection Information Storage Tag Support: No 00:13:52.483 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:52.483 Storage Tag Check Read Support: No 00:13:52.483 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.483 13:08:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:52.483 13:08:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:13:52.742 ===================================================== 00:13:52.742 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:52.742 ===================================================== 00:13:52.742 Controller Capabilities/Features 00:13:52.742 ================================ 00:13:52.742 Vendor ID: 1b36 00:13:52.742 Subsystem Vendor ID: 1af4 00:13:52.742 Serial Number: 12341 00:13:52.742 Model Number: QEMU NVMe Ctrl 00:13:52.742 Firmware Version: 8.0.0 00:13:52.742 Recommended Arb Burst: 6 00:13:52.742 IEEE OUI Identifier: 00 54 52 00:13:52.742 Multi-path I/O 00:13:52.742 May have multiple subsystem ports: No 00:13:52.742 May have multiple 
controllers: No 00:13:52.742 Associated with SR-IOV VF: No 00:13:52.742 Max Data Transfer Size: 524288 00:13:52.742 Max Number of Namespaces: 256 00:13:52.742 Max Number of I/O Queues: 64 00:13:52.742 NVMe Specification Version (VS): 1.4 00:13:52.742 NVMe Specification Version (Identify): 1.4 00:13:52.742 Maximum Queue Entries: 2048 00:13:52.742 Contiguous Queues Required: Yes 00:13:52.742 Arbitration Mechanisms Supported 00:13:52.742 Weighted Round Robin: Not Supported 00:13:52.742 Vendor Specific: Not Supported 00:13:52.742 Reset Timeout: 7500 ms 00:13:52.742 Doorbell Stride: 4 bytes 00:13:52.742 NVM Subsystem Reset: Not Supported 00:13:52.742 Command Sets Supported 00:13:52.742 NVM Command Set: Supported 00:13:52.742 Boot Partition: Not Supported 00:13:52.742 Memory Page Size Minimum: 4096 bytes 00:13:52.742 Memory Page Size Maximum: 65536 bytes 00:13:52.742 Persistent Memory Region: Not Supported 00:13:52.742 Optional Asynchronous Events Supported 00:13:52.742 Namespace Attribute Notices: Supported 00:13:52.742 Firmware Activation Notices: Not Supported 00:13:52.742 ANA Change Notices: Not Supported 00:13:52.742 PLE Aggregate Log Change Notices: Not Supported 00:13:52.742 LBA Status Info Alert Notices: Not Supported 00:13:52.742 EGE Aggregate Log Change Notices: Not Supported 00:13:52.742 Normal NVM Subsystem Shutdown event: Not Supported 00:13:52.742 Zone Descriptor Change Notices: Not Supported 00:13:52.742 Discovery Log Change Notices: Not Supported 00:13:52.742 Controller Attributes 00:13:52.742 128-bit Host Identifier: Not Supported 00:13:52.742 Non-Operational Permissive Mode: Not Supported 00:13:52.742 NVM Sets: Not Supported 00:13:52.742 Read Recovery Levels: Not Supported 00:13:52.742 Endurance Groups: Not Supported 00:13:52.742 Predictable Latency Mode: Not Supported 00:13:52.742 Traffic Based Keep ALive: Not Supported 00:13:52.742 Namespace Granularity: Not Supported 00:13:52.742 SQ Associations: Not Supported 00:13:52.742 UUID List: Not Supported 00:13:52.742 Multi-Domain Subsystem: Not Supported 00:13:52.742 Fixed Capacity Management: Not Supported 00:13:52.742 Variable Capacity Management: Not Supported 00:13:52.742 Delete Endurance Group: Not Supported 00:13:52.742 Delete NVM Set: Not Supported 00:13:52.742 Extended LBA Formats Supported: Supported 00:13:52.742 Flexible Data Placement Supported: Not Supported 00:13:52.742 00:13:52.742 Controller Memory Buffer Support 00:13:52.742 ================================ 00:13:52.742 Supported: No 00:13:52.742 00:13:52.742 Persistent Memory Region Support 00:13:52.742 ================================ 00:13:52.742 Supported: No 00:13:52.742 00:13:52.742 Admin Command Set Attributes 00:13:52.742 ============================ 00:13:52.742 Security Send/Receive: Not Supported 00:13:52.742 Format NVM: Supported 00:13:52.742 Firmware Activate/Download: Not Supported 00:13:52.742 Namespace Management: Supported 00:13:52.742 Device Self-Test: Not Supported 00:13:52.742 Directives: Supported 00:13:52.742 NVMe-MI: Not Supported 00:13:52.742 Virtualization Management: Not Supported 00:13:52.742 Doorbell Buffer Config: Supported 00:13:52.742 Get LBA Status Capability: Not Supported 00:13:52.742 Command & Feature Lockdown Capability: Not Supported 00:13:52.742 Abort Command Limit: 4 00:13:52.742 Async Event Request Limit: 4 00:13:52.742 Number of Firmware Slots: N/A 00:13:52.742 Firmware Slot 1 Read-Only: N/A 00:13:52.742 Firmware Activation Without Reset: N/A 00:13:52.742 Multiple Update Detection Support: N/A 00:13:52.742 Firmware Update 
Granularity: No Information Provided 00:13:52.742 Per-Namespace SMART Log: Yes 00:13:52.742 Asymmetric Namespace Access Log Page: Not Supported 00:13:52.742 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:52.742 Command Effects Log Page: Supported 00:13:52.742 Get Log Page Extended Data: Supported 00:13:52.742 Telemetry Log Pages: Not Supported 00:13:52.742 Persistent Event Log Pages: Not Supported 00:13:52.742 Supported Log Pages Log Page: May Support 00:13:52.742 Commands Supported & Effects Log Page: Not Supported 00:13:52.742 Feature Identifiers & Effects Log Page:May Support 00:13:52.742 NVMe-MI Commands & Effects Log Page: May Support 00:13:52.742 Data Area 4 for Telemetry Log: Not Supported 00:13:52.742 Error Log Page Entries Supported: 1 00:13:52.742 Keep Alive: Not Supported 00:13:52.742 00:13:52.742 NVM Command Set Attributes 00:13:52.742 ========================== 00:13:52.742 Submission Queue Entry Size 00:13:52.742 Max: 64 00:13:52.742 Min: 64 00:13:52.742 Completion Queue Entry Size 00:13:52.742 Max: 16 00:13:52.742 Min: 16 00:13:52.742 Number of Namespaces: 256 00:13:52.742 Compare Command: Supported 00:13:52.742 Write Uncorrectable Command: Not Supported 00:13:52.742 Dataset Management Command: Supported 00:13:52.742 Write Zeroes Command: Supported 00:13:52.742 Set Features Save Field: Supported 00:13:52.742 Reservations: Not Supported 00:13:52.742 Timestamp: Supported 00:13:52.742 Copy: Supported 00:13:52.742 Volatile Write Cache: Present 00:13:52.742 Atomic Write Unit (Normal): 1 00:13:52.742 Atomic Write Unit (PFail): 1 00:13:52.742 Atomic Compare & Write Unit: 1 00:13:52.742 Fused Compare & Write: Not Supported 00:13:52.742 Scatter-Gather List 00:13:52.742 SGL Command Set: Supported 00:13:52.742 SGL Keyed: Not Supported 00:13:52.742 SGL Bit Bucket Descriptor: Not Supported 00:13:52.742 SGL Metadata Pointer: Not Supported 00:13:52.742 Oversized SGL: Not Supported 00:13:52.742 SGL Metadata Address: Not Supported 00:13:52.742 SGL Offset: Not Supported 00:13:52.742 Transport SGL Data Block: Not Supported 00:13:52.742 Replay Protected Memory Block: Not Supported 00:13:52.742 00:13:52.742 Firmware Slot Information 00:13:52.742 ========================= 00:13:52.742 Active slot: 1 00:13:52.742 Slot 1 Firmware Revision: 1.0 00:13:52.742 00:13:52.742 00:13:52.742 Commands Supported and Effects 00:13:52.742 ============================== 00:13:52.742 Admin Commands 00:13:52.742 -------------- 00:13:52.742 Delete I/O Submission Queue (00h): Supported 00:13:52.742 Create I/O Submission Queue (01h): Supported 00:13:52.742 Get Log Page (02h): Supported 00:13:52.742 Delete I/O Completion Queue (04h): Supported 00:13:52.742 Create I/O Completion Queue (05h): Supported 00:13:52.742 Identify (06h): Supported 00:13:52.742 Abort (08h): Supported 00:13:52.742 Set Features (09h): Supported 00:13:52.742 Get Features (0Ah): Supported 00:13:52.742 Asynchronous Event Request (0Ch): Supported 00:13:52.742 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:52.742 Directive Send (19h): Supported 00:13:52.742 Directive Receive (1Ah): Supported 00:13:52.742 Virtualization Management (1Ch): Supported 00:13:52.742 Doorbell Buffer Config (7Ch): Supported 00:13:52.742 Format NVM (80h): Supported LBA-Change 00:13:52.742 I/O Commands 00:13:52.742 ------------ 00:13:52.742 Flush (00h): Supported LBA-Change 00:13:52.742 Write (01h): Supported LBA-Change 00:13:52.742 Read (02h): Supported 00:13:52.742 Compare (05h): Supported 00:13:52.743 Write Zeroes (08h): Supported LBA-Change 00:13:52.743 
Dataset Management (09h): Supported LBA-Change 00:13:52.743 Unknown (0Ch): Supported 00:13:52.743 Unknown (12h): Supported 00:13:52.743 Copy (19h): Supported LBA-Change 00:13:52.743 Unknown (1Dh): Supported LBA-Change 00:13:52.743 00:13:52.743 Error Log 00:13:52.743 ========= 00:13:52.743 00:13:52.743 Arbitration 00:13:52.743 =========== 00:13:52.743 Arbitration Burst: no limit 00:13:52.743 00:13:52.743 Power Management 00:13:52.743 ================ 00:13:52.743 Number of Power States: 1 00:13:52.743 Current Power State: Power State #0 00:13:52.743 Power State #0: 00:13:52.743 Max Power: 25.00 W 00:13:52.743 Non-Operational State: Operational 00:13:52.743 Entry Latency: 16 microseconds 00:13:52.743 Exit Latency: 4 microseconds 00:13:52.743 Relative Read Throughput: 0 00:13:52.743 Relative Read Latency: 0 00:13:52.743 Relative Write Throughput: 0 00:13:52.743 Relative Write Latency: 0 00:13:52.743 Idle Power: Not Reported 00:13:52.743 Active Power: Not Reported 00:13:52.743 Non-Operational Permissive Mode: Not Supported 00:13:52.743 00:13:52.743 Health Information 00:13:52.743 ================== 00:13:52.743 Critical Warnings: 00:13:52.743 Available Spare Space: OK 00:13:52.743 Temperature: OK 00:13:52.743 Device Reliability: OK 00:13:52.743 Read Only: No 00:13:52.743 Volatile Memory Backup: OK 00:13:52.743 Current Temperature: 323 Kelvin (50 Celsius) 00:13:52.743 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:52.743 Available Spare: 0% 00:13:52.743 Available Spare Threshold: 0% 00:13:52.743 Life Percentage Used: 0% 00:13:52.743 Data Units Read: 1009 00:13:52.743 Data Units Written: 876 00:13:52.743 Host Read Commands: 48251 00:13:52.743 Host Write Commands: 47052 00:13:52.743 Controller Busy Time: 0 minutes 00:13:52.743 Power Cycles: 0 00:13:52.743 Power On Hours: 0 hours 00:13:52.743 Unsafe Shutdowns: 0 00:13:52.743 Unrecoverable Media Errors: 0 00:13:52.743 Lifetime Error Log Entries: 0 00:13:52.743 Warning Temperature Time: 0 minutes 00:13:52.743 Critical Temperature Time: 0 minutes 00:13:52.743 00:13:52.743 Number of Queues 00:13:52.743 ================ 00:13:52.743 Number of I/O Submission Queues: 64 00:13:52.743 Number of I/O Completion Queues: 64 00:13:52.743 00:13:52.743 ZNS Specific Controller Data 00:13:52.743 ============================ 00:13:52.743 Zone Append Size Limit: 0 00:13:52.743 00:13:52.743 00:13:52.743 Active Namespaces 00:13:52.743 ================= 00:13:52.743 Namespace ID:1 00:13:52.743 Error Recovery Timeout: Unlimited 00:13:52.743 Command Set Identifier: NVM (00h) 00:13:52.743 Deallocate: Supported 00:13:52.743 Deallocated/Unwritten Error: Supported 00:13:52.743 Deallocated Read Value: All 0x00 00:13:52.743 Deallocate in Write Zeroes: Not Supported 00:13:52.743 Deallocated Guard Field: 0xFFFF 00:13:52.743 Flush: Supported 00:13:52.743 Reservation: Not Supported 00:13:52.743 Namespace Sharing Capabilities: Private 00:13:52.743 Size (in LBAs): 1310720 (5GiB) 00:13:52.743 Capacity (in LBAs): 1310720 (5GiB) 00:13:52.743 Utilization (in LBAs): 1310720 (5GiB) 00:13:52.743 Thin Provisioning: Not Supported 00:13:52.743 Per-NS Atomic Units: No 00:13:52.743 Maximum Single Source Range Length: 128 00:13:52.743 Maximum Copy Length: 128 00:13:52.743 Maximum Source Range Count: 128 00:13:52.743 NGUID/EUI64 Never Reused: No 00:13:52.743 Namespace Write Protected: No 00:13:52.743 Number of LBA Formats: 8 00:13:52.743 Current LBA Format: LBA Format #04 00:13:52.743 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:52.743 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:13:52.743 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:52.743 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:52.743 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:52.743 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:52.743 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:52.743 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:52.743 00:13:52.743 NVM Specific Namespace Data 00:13:52.743 =========================== 00:13:52.743 Logical Block Storage Tag Mask: 0 00:13:52.743 Protection Information Capabilities: 00:13:52.743 16b Guard Protection Information Storage Tag Support: No 00:13:52.743 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:52.743 Storage Tag Check Read Support: No 00:13:52.743 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:52.743 13:08:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:52.743 13:08:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:13:53.310 ===================================================== 00:13:53.310 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:53.310 ===================================================== 00:13:53.310 Controller Capabilities/Features 00:13:53.310 ================================ 00:13:53.310 Vendor ID: 1b36 00:13:53.310 Subsystem Vendor ID: 1af4 00:13:53.310 Serial Number: 12342 00:13:53.310 Model Number: QEMU NVMe Ctrl 00:13:53.310 Firmware Version: 8.0.0 00:13:53.310 Recommended Arb Burst: 6 00:13:53.311 IEEE OUI Identifier: 00 54 52 00:13:53.311 Multi-path I/O 00:13:53.311 May have multiple subsystem ports: No 00:13:53.311 May have multiple controllers: No 00:13:53.311 Associated with SR-IOV VF: No 00:13:53.311 Max Data Transfer Size: 524288 00:13:53.311 Max Number of Namespaces: 256 00:13:53.311 Max Number of I/O Queues: 64 00:13:53.311 NVMe Specification Version (VS): 1.4 00:13:53.311 NVMe Specification Version (Identify): 1.4 00:13:53.311 Maximum Queue Entries: 2048 00:13:53.311 Contiguous Queues Required: Yes 00:13:53.311 Arbitration Mechanisms Supported 00:13:53.311 Weighted Round Robin: Not Supported 00:13:53.311 Vendor Specific: Not Supported 00:13:53.311 Reset Timeout: 7500 ms 00:13:53.311 Doorbell Stride: 4 bytes 00:13:53.311 NVM Subsystem Reset: Not Supported 00:13:53.311 Command Sets Supported 00:13:53.311 NVM Command Set: Supported 00:13:53.311 Boot Partition: Not Supported 00:13:53.311 Memory Page Size Minimum: 4096 bytes 00:13:53.311 Memory Page Size Maximum: 65536 bytes 00:13:53.311 Persistent Memory Region: Not Supported 00:13:53.311 Optional Asynchronous Events Supported 00:13:53.311 Namespace Attribute Notices: Supported 00:13:53.311 Firmware 
Activation Notices: Not Supported 00:13:53.311 ANA Change Notices: Not Supported 00:13:53.311 PLE Aggregate Log Change Notices: Not Supported 00:13:53.311 LBA Status Info Alert Notices: Not Supported 00:13:53.311 EGE Aggregate Log Change Notices: Not Supported 00:13:53.311 Normal NVM Subsystem Shutdown event: Not Supported 00:13:53.311 Zone Descriptor Change Notices: Not Supported 00:13:53.311 Discovery Log Change Notices: Not Supported 00:13:53.311 Controller Attributes 00:13:53.311 128-bit Host Identifier: Not Supported 00:13:53.311 Non-Operational Permissive Mode: Not Supported 00:13:53.311 NVM Sets: Not Supported 00:13:53.311 Read Recovery Levels: Not Supported 00:13:53.311 Endurance Groups: Not Supported 00:13:53.311 Predictable Latency Mode: Not Supported 00:13:53.311 Traffic Based Keep ALive: Not Supported 00:13:53.311 Namespace Granularity: Not Supported 00:13:53.311 SQ Associations: Not Supported 00:13:53.311 UUID List: Not Supported 00:13:53.311 Multi-Domain Subsystem: Not Supported 00:13:53.311 Fixed Capacity Management: Not Supported 00:13:53.311 Variable Capacity Management: Not Supported 00:13:53.311 Delete Endurance Group: Not Supported 00:13:53.311 Delete NVM Set: Not Supported 00:13:53.311 Extended LBA Formats Supported: Supported 00:13:53.311 Flexible Data Placement Supported: Not Supported 00:13:53.311 00:13:53.311 Controller Memory Buffer Support 00:13:53.311 ================================ 00:13:53.311 Supported: No 00:13:53.311 00:13:53.311 Persistent Memory Region Support 00:13:53.311 ================================ 00:13:53.311 Supported: No 00:13:53.311 00:13:53.311 Admin Command Set Attributes 00:13:53.311 ============================ 00:13:53.311 Security Send/Receive: Not Supported 00:13:53.311 Format NVM: Supported 00:13:53.311 Firmware Activate/Download: Not Supported 00:13:53.311 Namespace Management: Supported 00:13:53.311 Device Self-Test: Not Supported 00:13:53.311 Directives: Supported 00:13:53.311 NVMe-MI: Not Supported 00:13:53.311 Virtualization Management: Not Supported 00:13:53.311 Doorbell Buffer Config: Supported 00:13:53.311 Get LBA Status Capability: Not Supported 00:13:53.311 Command & Feature Lockdown Capability: Not Supported 00:13:53.311 Abort Command Limit: 4 00:13:53.311 Async Event Request Limit: 4 00:13:53.311 Number of Firmware Slots: N/A 00:13:53.311 Firmware Slot 1 Read-Only: N/A 00:13:53.311 Firmware Activation Without Reset: N/A 00:13:53.311 Multiple Update Detection Support: N/A 00:13:53.311 Firmware Update Granularity: No Information Provided 00:13:53.311 Per-Namespace SMART Log: Yes 00:13:53.311 Asymmetric Namespace Access Log Page: Not Supported 00:13:53.311 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:53.311 Command Effects Log Page: Supported 00:13:53.311 Get Log Page Extended Data: Supported 00:13:53.311 Telemetry Log Pages: Not Supported 00:13:53.311 Persistent Event Log Pages: Not Supported 00:13:53.311 Supported Log Pages Log Page: May Support 00:13:53.311 Commands Supported & Effects Log Page: Not Supported 00:13:53.311 Feature Identifiers & Effects Log Page:May Support 00:13:53.311 NVMe-MI Commands & Effects Log Page: May Support 00:13:53.311 Data Area 4 for Telemetry Log: Not Supported 00:13:53.311 Error Log Page Entries Supported: 1 00:13:53.311 Keep Alive: Not Supported 00:13:53.311 00:13:53.311 NVM Command Set Attributes 00:13:53.311 ========================== 00:13:53.311 Submission Queue Entry Size 00:13:53.311 Max: 64 00:13:53.311 Min: 64 00:13:53.311 Completion Queue Entry Size 00:13:53.311 Max: 16 
00:13:53.311 Min: 16 00:13:53.311 Number of Namespaces: 256 00:13:53.311 Compare Command: Supported 00:13:53.311 Write Uncorrectable Command: Not Supported 00:13:53.311 Dataset Management Command: Supported 00:13:53.311 Write Zeroes Command: Supported 00:13:53.311 Set Features Save Field: Supported 00:13:53.311 Reservations: Not Supported 00:13:53.311 Timestamp: Supported 00:13:53.311 Copy: Supported 00:13:53.311 Volatile Write Cache: Present 00:13:53.311 Atomic Write Unit (Normal): 1 00:13:53.311 Atomic Write Unit (PFail): 1 00:13:53.311 Atomic Compare & Write Unit: 1 00:13:53.311 Fused Compare & Write: Not Supported 00:13:53.311 Scatter-Gather List 00:13:53.311 SGL Command Set: Supported 00:13:53.311 SGL Keyed: Not Supported 00:13:53.311 SGL Bit Bucket Descriptor: Not Supported 00:13:53.311 SGL Metadata Pointer: Not Supported 00:13:53.311 Oversized SGL: Not Supported 00:13:53.311 SGL Metadata Address: Not Supported 00:13:53.311 SGL Offset: Not Supported 00:13:53.311 Transport SGL Data Block: Not Supported 00:13:53.311 Replay Protected Memory Block: Not Supported 00:13:53.311 00:13:53.311 Firmware Slot Information 00:13:53.311 ========================= 00:13:53.311 Active slot: 1 00:13:53.311 Slot 1 Firmware Revision: 1.0 00:13:53.311 00:13:53.311 00:13:53.311 Commands Supported and Effects 00:13:53.311 ============================== 00:13:53.311 Admin Commands 00:13:53.311 -------------- 00:13:53.311 Delete I/O Submission Queue (00h): Supported 00:13:53.311 Create I/O Submission Queue (01h): Supported 00:13:53.311 Get Log Page (02h): Supported 00:13:53.311 Delete I/O Completion Queue (04h): Supported 00:13:53.311 Create I/O Completion Queue (05h): Supported 00:13:53.311 Identify (06h): Supported 00:13:53.311 Abort (08h): Supported 00:13:53.311 Set Features (09h): Supported 00:13:53.311 Get Features (0Ah): Supported 00:13:53.311 Asynchronous Event Request (0Ch): Supported 00:13:53.311 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:53.311 Directive Send (19h): Supported 00:13:53.311 Directive Receive (1Ah): Supported 00:13:53.311 Virtualization Management (1Ch): Supported 00:13:53.311 Doorbell Buffer Config (7Ch): Supported 00:13:53.311 Format NVM (80h): Supported LBA-Change 00:13:53.311 I/O Commands 00:13:53.311 ------------ 00:13:53.311 Flush (00h): Supported LBA-Change 00:13:53.311 Write (01h): Supported LBA-Change 00:13:53.311 Read (02h): Supported 00:13:53.311 Compare (05h): Supported 00:13:53.311 Write Zeroes (08h): Supported LBA-Change 00:13:53.311 Dataset Management (09h): Supported LBA-Change 00:13:53.311 Unknown (0Ch): Supported 00:13:53.311 Unknown (12h): Supported 00:13:53.311 Copy (19h): Supported LBA-Change 00:13:53.311 Unknown (1Dh): Supported LBA-Change 00:13:53.311 00:13:53.311 Error Log 00:13:53.311 ========= 00:13:53.311 00:13:53.311 Arbitration 00:13:53.311 =========== 00:13:53.311 Arbitration Burst: no limit 00:13:53.311 00:13:53.311 Power Management 00:13:53.311 ================ 00:13:53.311 Number of Power States: 1 00:13:53.311 Current Power State: Power State #0 00:13:53.311 Power State #0: 00:13:53.311 Max Power: 25.00 W 00:13:53.311 Non-Operational State: Operational 00:13:53.311 Entry Latency: 16 microseconds 00:13:53.311 Exit Latency: 4 microseconds 00:13:53.311 Relative Read Throughput: 0 00:13:53.311 Relative Read Latency: 0 00:13:53.311 Relative Write Throughput: 0 00:13:53.311 Relative Write Latency: 0 00:13:53.311 Idle Power: Not Reported 00:13:53.311 Active Power: Not Reported 00:13:53.311 Non-Operational Permissive Mode: Not Supported 
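The Health Information block that follows reports temperatures in Kelvin with a Celsius value in parentheses; the two are related by the integer convention C = K - 273 used in this output (strictly 273.15). A one-line check, taking the value from the dump below:

    kelvin=323
    echo "$((kelvin - 273)) Celsius"   # 50 Celsius, matching "323 Kelvin (50 Celsius)"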
00:13:53.311 00:13:53.311 Health Information 00:13:53.311 ================== 00:13:53.311 Critical Warnings: 00:13:53.311 Available Spare Space: OK 00:13:53.311 Temperature: OK 00:13:53.311 Device Reliability: OK 00:13:53.311 Read Only: No 00:13:53.311 Volatile Memory Backup: OK 00:13:53.312 Current Temperature: 323 Kelvin (50 Celsius) 00:13:53.312 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:53.312 Available Spare: 0% 00:13:53.312 Available Spare Threshold: 0% 00:13:53.312 Life Percentage Used: 0% 00:13:53.312 Data Units Read: 2084 00:13:53.312 Data Units Written: 1871 00:13:53.312 Host Read Commands: 99562 00:13:53.312 Host Write Commands: 97831 00:13:53.312 Controller Busy Time: 0 minutes 00:13:53.312 Power Cycles: 0 00:13:53.312 Power On Hours: 0 hours 00:13:53.312 Unsafe Shutdowns: 0 00:13:53.312 Unrecoverable Media Errors: 0 00:13:53.312 Lifetime Error Log Entries: 0 00:13:53.312 Warning Temperature Time: 0 minutes 00:13:53.312 Critical Temperature Time: 0 minutes 00:13:53.312 00:13:53.312 Number of Queues 00:13:53.312 ================ 00:13:53.312 Number of I/O Submission Queues: 64 00:13:53.312 Number of I/O Completion Queues: 64 00:13:53.312 00:13:53.312 ZNS Specific Controller Data 00:13:53.312 ============================ 00:13:53.312 Zone Append Size Limit: 0 00:13:53.312 00:13:53.312 00:13:53.312 Active Namespaces 00:13:53.312 ================= 00:13:53.312 Namespace ID:1 00:13:53.312 Error Recovery Timeout: Unlimited 00:13:53.312 Command Set Identifier: NVM (00h) 00:13:53.312 Deallocate: Supported 00:13:53.312 Deallocated/Unwritten Error: Supported 00:13:53.312 Deallocated Read Value: All 0x00 00:13:53.312 Deallocate in Write Zeroes: Not Supported 00:13:53.312 Deallocated Guard Field: 0xFFFF 00:13:53.312 Flush: Supported 00:13:53.312 Reservation: Not Supported 00:13:53.312 Namespace Sharing Capabilities: Private 00:13:53.312 Size (in LBAs): 1048576 (4GiB) 00:13:53.312 Capacity (in LBAs): 1048576 (4GiB) 00:13:53.312 Utilization (in LBAs): 1048576 (4GiB) 00:13:53.312 Thin Provisioning: Not Supported 00:13:53.312 Per-NS Atomic Units: No 00:13:53.312 Maximum Single Source Range Length: 128 00:13:53.312 Maximum Copy Length: 128 00:13:53.312 Maximum Source Range Count: 128 00:13:53.312 NGUID/EUI64 Never Reused: No 00:13:53.312 Namespace Write Protected: No 00:13:53.312 Number of LBA Formats: 8 00:13:53.312 Current LBA Format: LBA Format #04 00:13:53.312 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:53.312 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:53.312 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:53.312 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:53.312 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:53.312 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:53.312 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:53.312 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:53.312 00:13:53.312 NVM Specific Namespace Data 00:13:53.312 =========================== 00:13:53.312 Logical Block Storage Tag Mask: 0 00:13:53.312 Protection Information Capabilities: 00:13:53.312 16b Guard Protection Information Storage Tag Support: No 00:13:53.312 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:53.312 Storage Tag Check Read Support: No 00:13:53.312 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Namespace ID:2 00:13:53.312 Error Recovery Timeout: Unlimited 00:13:53.312 Command Set Identifier: NVM (00h) 00:13:53.312 Deallocate: Supported 00:13:53.312 Deallocated/Unwritten Error: Supported 00:13:53.312 Deallocated Read Value: All 0x00 00:13:53.312 Deallocate in Write Zeroes: Not Supported 00:13:53.312 Deallocated Guard Field: 0xFFFF 00:13:53.312 Flush: Supported 00:13:53.312 Reservation: Not Supported 00:13:53.312 Namespace Sharing Capabilities: Private 00:13:53.312 Size (in LBAs): 1048576 (4GiB) 00:13:53.312 Capacity (in LBAs): 1048576 (4GiB) 00:13:53.312 Utilization (in LBAs): 1048576 (4GiB) 00:13:53.312 Thin Provisioning: Not Supported 00:13:53.312 Per-NS Atomic Units: No 00:13:53.312 Maximum Single Source Range Length: 128 00:13:53.312 Maximum Copy Length: 128 00:13:53.312 Maximum Source Range Count: 128 00:13:53.312 NGUID/EUI64 Never Reused: No 00:13:53.312 Namespace Write Protected: No 00:13:53.312 Number of LBA Formats: 8 00:13:53.312 Current LBA Format: LBA Format #04 00:13:53.312 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:53.312 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:53.312 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:53.312 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:53.312 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:53.312 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:53.312 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:53.312 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:53.312 00:13:53.312 NVM Specific Namespace Data 00:13:53.312 =========================== 00:13:53.312 Logical Block Storage Tag Mask: 0 00:13:53.312 Protection Information Capabilities: 00:13:53.312 16b Guard Protection Information Storage Tag Support: No 00:13:53.312 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:53.312 Storage Tag Check Read Support: No 00:13:53.312 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Namespace ID:3 00:13:53.312 Error Recovery Timeout: Unlimited 00:13:53.312 Command Set Identifier: NVM (00h) 00:13:53.312 Deallocate: Supported 00:13:53.312 Deallocated/Unwritten Error: Supported 00:13:53.312 Deallocated Read 
Value: All 0x00 00:13:53.312 Deallocate in Write Zeroes: Not Supported 00:13:53.312 Deallocated Guard Field: 0xFFFF 00:13:53.312 Flush: Supported 00:13:53.312 Reservation: Not Supported 00:13:53.312 Namespace Sharing Capabilities: Private 00:13:53.312 Size (in LBAs): 1048576 (4GiB) 00:13:53.312 Capacity (in LBAs): 1048576 (4GiB) 00:13:53.312 Utilization (in LBAs): 1048576 (4GiB) 00:13:53.312 Thin Provisioning: Not Supported 00:13:53.312 Per-NS Atomic Units: No 00:13:53.312 Maximum Single Source Range Length: 128 00:13:53.312 Maximum Copy Length: 128 00:13:53.312 Maximum Source Range Count: 128 00:13:53.312 NGUID/EUI64 Never Reused: No 00:13:53.312 Namespace Write Protected: No 00:13:53.312 Number of LBA Formats: 8 00:13:53.312 Current LBA Format: LBA Format #04 00:13:53.312 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:53.312 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:53.312 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:53.312 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:53.312 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:53.312 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:53.312 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:53.312 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:53.312 00:13:53.312 NVM Specific Namespace Data 00:13:53.312 =========================== 00:13:53.312 Logical Block Storage Tag Mask: 0 00:13:53.312 Protection Information Capabilities: 00:13:53.312 16b Guard Protection Information Storage Tag Support: No 00:13:53.312 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:53.312 Storage Tag Check Read Support: No 00:13:53.312 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.312 13:08:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:53.312 13:08:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:13:53.571 ===================================================== 00:13:53.571 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:53.571 ===================================================== 00:13:53.571 Controller Capabilities/Features 00:13:53.571 ================================ 00:13:53.571 Vendor ID: 1b36 00:13:53.571 Subsystem Vendor ID: 1af4 00:13:53.571 Serial Number: 12343 00:13:53.571 Model Number: QEMU NVMe Ctrl 00:13:53.571 Firmware Version: 8.0.0 00:13:53.571 Recommended Arb Burst: 6 00:13:53.571 IEEE OUI Identifier: 00 54 52 00:13:53.571 Multi-path I/O 00:13:53.571 May have multiple subsystem ports: No 00:13:53.571 May have multiple controllers: Yes 00:13:53.571 Associated with SR-IOV VF: No 00:13:53.571 Max Data Transfer Size: 524288 00:13:53.571 Max Number of Namespaces: 
256 00:13:53.571 Max Number of I/O Queues: 64 00:13:53.571 NVMe Specification Version (VS): 1.4 00:13:53.571 NVMe Specification Version (Identify): 1.4 00:13:53.571 Maximum Queue Entries: 2048 00:13:53.571 Contiguous Queues Required: Yes 00:13:53.571 Arbitration Mechanisms Supported 00:13:53.571 Weighted Round Robin: Not Supported 00:13:53.571 Vendor Specific: Not Supported 00:13:53.571 Reset Timeout: 7500 ms 00:13:53.571 Doorbell Stride: 4 bytes 00:13:53.571 NVM Subsystem Reset: Not Supported 00:13:53.571 Command Sets Supported 00:13:53.571 NVM Command Set: Supported 00:13:53.571 Boot Partition: Not Supported 00:13:53.571 Memory Page Size Minimum: 4096 bytes 00:13:53.571 Memory Page Size Maximum: 65536 bytes 00:13:53.571 Persistent Memory Region: Not Supported 00:13:53.571 Optional Asynchronous Events Supported 00:13:53.571 Namespace Attribute Notices: Supported 00:13:53.571 Firmware Activation Notices: Not Supported 00:13:53.571 ANA Change Notices: Not Supported 00:13:53.571 PLE Aggregate Log Change Notices: Not Supported 00:13:53.571 LBA Status Info Alert Notices: Not Supported 00:13:53.571 EGE Aggregate Log Change Notices: Not Supported 00:13:53.571 Normal NVM Subsystem Shutdown event: Not Supported 00:13:53.571 Zone Descriptor Change Notices: Not Supported 00:13:53.571 Discovery Log Change Notices: Not Supported 00:13:53.571 Controller Attributes 00:13:53.571 128-bit Host Identifier: Not Supported 00:13:53.571 Non-Operational Permissive Mode: Not Supported 00:13:53.571 NVM Sets: Not Supported 00:13:53.571 Read Recovery Levels: Not Supported 00:13:53.571 Endurance Groups: Supported 00:13:53.571 Predictable Latency Mode: Not Supported 00:13:53.571 Traffic Based Keep Alive: Not Supported 00:13:53.571 Namespace Granularity: Not Supported 00:13:53.571 SQ Associations: Not Supported 00:13:53.571 UUID List: Not Supported 00:13:53.571 Multi-Domain Subsystem: Not Supported 00:13:53.571 Fixed Capacity Management: Not Supported 00:13:53.571 Variable Capacity Management: Not Supported 00:13:53.571 Delete Endurance Group: Not Supported 00:13:53.571 Delete NVM Set: Not Supported 00:13:53.571 Extended LBA Formats Supported: Supported 00:13:53.571 Flexible Data Placement Supported: Supported 00:13:53.571 00:13:53.571 Controller Memory Buffer Support 00:13:53.571 ================================ 00:13:53.571 Supported: No 00:13:53.571 00:13:53.571 Persistent Memory Region Support 00:13:53.571 ================================ 00:13:53.571 Supported: No 00:13:53.571 00:13:53.571 Admin Command Set Attributes 00:13:53.571 ============================ 00:13:53.571 Security Send/Receive: Not Supported 00:13:53.571 Format NVM: Supported 00:13:53.571 Firmware Activate/Download: Not Supported 00:13:53.571 Namespace Management: Supported 00:13:53.571 Device Self-Test: Not Supported 00:13:53.571 Directives: Supported 00:13:53.571 NVMe-MI: Not Supported 00:13:53.571 Virtualization Management: Not Supported 00:13:53.571 Doorbell Buffer Config: Supported 00:13:53.571 Get LBA Status Capability: Not Supported 00:13:53.571 Command & Feature Lockdown Capability: Not Supported 00:13:53.571 Abort Command Limit: 4 00:13:53.571 Async Event Request Limit: 4 00:13:53.571 Number of Firmware Slots: N/A 00:13:53.571 Firmware Slot 1 Read-Only: N/A 00:13:53.571 Firmware Activation Without Reset: N/A 00:13:53.571 Multiple Update Detection Support: N/A 00:13:53.571 Firmware Update Granularity: No Information Provided 00:13:53.571 Per-Namespace SMART Log: Yes 00:13:53.571 Asymmetric Namespace Access Log Page: Not Supported
00:13:53.571 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:53.571 Command Effects Log Page: Supported 00:13:53.571 Get Log Page Extended Data: Supported 00:13:53.571 Telemetry Log Pages: Not Supported 00:13:53.571 Persistent Event Log Pages: Not Supported 00:13:53.571 Supported Log Pages Log Page: May Support 00:13:53.571 Commands Supported & Effects Log Page: Not Supported 00:13:53.571 Feature Identifiers & Effects Log Page: May Support 00:13:53.571 NVMe-MI Commands & Effects Log Page: May Support 00:13:53.571 Data Area 4 for Telemetry Log: Not Supported 00:13:53.571 Error Log Page Entries Supported: 1 00:13:53.571 Keep Alive: Not Supported 00:13:53.571 00:13:53.571 NVM Command Set Attributes 00:13:53.571 ========================== 00:13:53.571 Submission Queue Entry Size 00:13:53.571 Max: 64 00:13:53.571 Min: 64 00:13:53.571 Completion Queue Entry Size 00:13:53.571 Max: 16 00:13:53.571 Min: 16 00:13:53.571 Number of Namespaces: 256 00:13:53.571 Compare Command: Supported 00:13:53.571 Write Uncorrectable Command: Not Supported 00:13:53.571 Dataset Management Command: Supported 00:13:53.571 Write Zeroes Command: Supported 00:13:53.571 Set Features Save Field: Supported 00:13:53.571 Reservations: Not Supported 00:13:53.571 Timestamp: Supported 00:13:53.571 Copy: Supported 00:13:53.571 Volatile Write Cache: Present 00:13:53.572 Atomic Write Unit (Normal): 1 00:13:53.572 Atomic Write Unit (PFail): 1 00:13:53.572 Atomic Compare & Write Unit: 1 00:13:53.572 Fused Compare & Write: Not Supported 00:13:53.572 Scatter-Gather List 00:13:53.572 SGL Command Set: Supported 00:13:53.572 SGL Keyed: Not Supported 00:13:53.572 SGL Bit Bucket Descriptor: Not Supported 00:13:53.572 SGL Metadata Pointer: Not Supported 00:13:53.572 Oversized SGL: Not Supported 00:13:53.572 SGL Metadata Address: Not Supported 00:13:53.572 SGL Offset: Not Supported 00:13:53.572 Transport SGL Data Block: Not Supported 00:13:53.572 Replay Protected Memory Block: Not Supported 00:13:53.572 00:13:53.572 Firmware Slot Information 00:13:53.572 ========================= 00:13:53.572 Active slot: 1 00:13:53.572 Slot 1 Firmware Revision: 1.0 00:13:53.572 00:13:53.572 00:13:53.572 Commands Supported and Effects 00:13:53.572 ============================== 00:13:53.572 Admin Commands 00:13:53.572 -------------- 00:13:53.572 Delete I/O Submission Queue (00h): Supported 00:13:53.572 Create I/O Submission Queue (01h): Supported 00:13:53.572 Get Log Page (02h): Supported 00:13:53.572 Delete I/O Completion Queue (04h): Supported 00:13:53.572 Create I/O Completion Queue (05h): Supported 00:13:53.572 Identify (06h): Supported 00:13:53.572 Abort (08h): Supported 00:13:53.572 Set Features (09h): Supported 00:13:53.572 Get Features (0Ah): Supported 00:13:53.572 Asynchronous Event Request (0Ch): Supported 00:13:53.572 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:53.572 Directive Send (19h): Supported 00:13:53.572 Directive Receive (1Ah): Supported 00:13:53.572 Virtualization Management (1Ch): Supported 00:13:53.572 Doorbell Buffer Config (7Ch): Supported 00:13:53.572 Format NVM (80h): Supported LBA-Change 00:13:53.572 I/O Commands 00:13:53.572 ------------ 00:13:53.572 Flush (00h): Supported LBA-Change 00:13:53.572 Write (01h): Supported LBA-Change 00:13:53.572 Read (02h): Supported 00:13:53.572 Compare (05h): Supported 00:13:53.572 Write Zeroes (08h): Supported LBA-Change 00:13:53.572 Dataset Management (09h): Supported LBA-Change 00:13:53.572 Unknown (0Ch): Supported 00:13:53.572 Unknown (12h): Supported 00:13:53.572 Copy
(19h): Supported LBA-Change 00:13:53.572 Unknown (1Dh): Supported LBA-Change 00:13:53.572 00:13:53.572 Error Log 00:13:53.572 ========= 00:13:53.572 00:13:53.572 Arbitration 00:13:53.572 =========== 00:13:53.572 Arbitration Burst: no limit 00:13:53.572 00:13:53.572 Power Management 00:13:53.572 ================ 00:13:53.572 Number of Power States: 1 00:13:53.572 Current Power State: Power State #0 00:13:53.572 Power State #0: 00:13:53.572 Max Power: 25.00 W 00:13:53.572 Non-Operational State: Operational 00:13:53.572 Entry Latency: 16 microseconds 00:13:53.572 Exit Latency: 4 microseconds 00:13:53.572 Relative Read Throughput: 0 00:13:53.572 Relative Read Latency: 0 00:13:53.572 Relative Write Throughput: 0 00:13:53.572 Relative Write Latency: 0 00:13:53.572 Idle Power: Not Reported 00:13:53.572 Active Power: Not Reported 00:13:53.572 Non-Operational Permissive Mode: Not Supported 00:13:53.572 00:13:53.572 Health Information 00:13:53.572 ================== 00:13:53.572 Critical Warnings: 00:13:53.572 Available Spare Space: OK 00:13:53.572 Temperature: OK 00:13:53.572 Device Reliability: OK 00:13:53.572 Read Only: No 00:13:53.572 Volatile Memory Backup: OK 00:13:53.572 Current Temperature: 323 Kelvin (50 Celsius) 00:13:53.572 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:53.572 Available Spare: 0% 00:13:53.572 Available Spare Threshold: 0% 00:13:53.572 Life Percentage Used: 0% 00:13:53.572 Data Units Read: 777 00:13:53.572 Data Units Written: 706 00:13:53.572 Host Read Commands: 33984 00:13:53.572 Host Write Commands: 33407 00:13:53.572 Controller Busy Time: 0 minutes 00:13:53.572 Power Cycles: 0 00:13:53.572 Power On Hours: 0 hours 00:13:53.572 Unsafe Shutdowns: 0 00:13:53.572 Unrecoverable Media Errors: 0 00:13:53.572 Lifetime Error Log Entries: 0 00:13:53.572 Warning Temperature Time: 0 minutes 00:13:53.572 Critical Temperature Time: 0 minutes 00:13:53.572 00:13:53.572 Number of Queues 00:13:53.572 ================ 00:13:53.572 Number of I/O Submission Queues: 64 00:13:53.572 Number of I/O Completion Queues: 64 00:13:53.572 00:13:53.572 ZNS Specific Controller Data 00:13:53.572 ============================ 00:13:53.572 Zone Append Size Limit: 0 00:13:53.572 00:13:53.572 00:13:53.572 Active Namespaces 00:13:53.572 ================= 00:13:53.572 Namespace ID:1 00:13:53.572 Error Recovery Timeout: Unlimited 00:13:53.572 Command Set Identifier: NVM (00h) 00:13:53.572 Deallocate: Supported 00:13:53.572 Deallocated/Unwritten Error: Supported 00:13:53.572 Deallocated Read Value: All 0x00 00:13:53.572 Deallocate in Write Zeroes: Not Supported 00:13:53.572 Deallocated Guard Field: 0xFFFF 00:13:53.572 Flush: Supported 00:13:53.572 Reservation: Not Supported 00:13:53.572 Namespace Sharing Capabilities: Multiple Controllers 00:13:53.572 Size (in LBAs): 262144 (1GiB) 00:13:53.572 Capacity (in LBAs): 262144 (1GiB) 00:13:53.572 Utilization (in LBAs): 262144 (1GiB) 00:13:53.572 Thin Provisioning: Not Supported 00:13:53.572 Per-NS Atomic Units: No 00:13:53.572 Maximum Single Source Range Length: 128 00:13:53.572 Maximum Copy Length: 128 00:13:53.572 Maximum Source Range Count: 128 00:13:53.572 NGUID/EUI64 Never Reused: No 00:13:53.572 Namespace Write Protected: No 00:13:53.572 Endurance group ID: 1 00:13:53.572 Number of LBA Formats: 8 00:13:53.572 Current LBA Format: LBA Format #04 00:13:53.572 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:53.572 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:53.572 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:53.572 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:13:53.572 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:53.572 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:53.572 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:53.572 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:53.572 00:13:53.572 Get Feature FDP: 00:13:53.572 ================ 00:13:53.572 Enabled: Yes 00:13:53.572 FDP configuration index: 0 00:13:53.572 00:13:53.572 FDP configurations log page 00:13:53.572 =========================== 00:13:53.572 Number of FDP configurations: 1 00:13:53.572 Version: 0 00:13:53.572 Size: 112 00:13:53.572 FDP Configuration Descriptor: 0 00:13:53.572 Descriptor Size: 96 00:13:53.572 Reclaim Group Identifier format: 2 00:13:53.572 FDP Volatile Write Cache: Not Present 00:13:53.572 FDP Configuration: Valid 00:13:53.572 Vendor Specific Size: 0 00:13:53.572 Number of Reclaim Groups: 2 00:13:53.572 Number of Reclaim Unit Handles: 8 00:13:53.572 Max Placement Identifiers: 128 00:13:53.572 Number of Namespaces Supported: 256 00:13:53.572 Reclaim Unit Nominal Size: 6000000 bytes 00:13:53.572 Estimated Reclaim Unit Time Limit: Not Reported 00:13:53.572 RUH Desc #000: RUH Type: Initially Isolated 00:13:53.572 RUH Desc #001: RUH Type: Initially Isolated 00:13:53.572 RUH Desc #002: RUH Type: Initially Isolated 00:13:53.572 RUH Desc #003: RUH Type: Initially Isolated 00:13:53.572 RUH Desc #004: RUH Type: Initially Isolated 00:13:53.572 RUH Desc #005: RUH Type: Initially Isolated 00:13:53.572 RUH Desc #006: RUH Type: Initially Isolated 00:13:53.572 RUH Desc #007: RUH Type: Initially Isolated 00:13:53.572 00:13:53.572 FDP reclaim unit handle usage log page 00:13:53.572 ====================================== 00:13:53.572 Number of Reclaim Unit Handles: 8 00:13:53.572 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:53.572 RUH Usage Desc #001: RUH Attributes: Unused 00:13:53.572 RUH Usage Desc #002: RUH Attributes: Unused 00:13:53.572 RUH Usage Desc #003: RUH Attributes: Unused 00:13:53.572 RUH Usage Desc #004: RUH Attributes: Unused 00:13:53.572 RUH Usage Desc #005: RUH Attributes: Unused 00:13:53.572 RUH Usage Desc #006: RUH Attributes: Unused 00:13:53.572 RUH Usage Desc #007: RUH Attributes: Unused 00:13:53.572 00:13:53.572 FDP statistics log page 00:13:53.572 ======================= 00:13:53.572 Host bytes with metadata written: 446210048 00:13:53.572 Media bytes with metadata written: 446275584 00:13:53.572 Media bytes erased: 0 00:13:53.572 00:13:53.572 FDP events log page 00:13:53.572 =================== 00:13:53.572 Number of FDP events: 0 00:13:53.572 00:13:53.572 NVM Specific Namespace Data 00:13:53.572 =========================== 00:13:53.572 Logical Block Storage Tag Mask: 0 00:13:53.572 Protection Information Capabilities: 00:13:53.572 16b Guard Protection Information Storage Tag Support: No 00:13:53.572 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:53.572 Storage Tag Check Read Support: No 00:13:53.572 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:53.572 ************************************ 00:13:53.572 END TEST nvme_identify 00:13:53.572 ************************************ 00:13:53.572 00:13:53.572 real 0m1.892s 00:13:53.572 user 0m0.793s 00:13:53.572 sys 0m0.865s 00:13:53.572 13:08:40 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.572 13:08:40 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:13:53.572 13:08:40 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:13:53.572 13:08:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:53.572 13:08:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.572 13:08:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.572 ************************************ 00:13:53.572 START TEST nvme_perf 00:13:53.572 ************************************ 00:13:53.572 13:08:40 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:13:53.572 13:08:40 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:13:54.944 Initializing NVMe Controllers 00:13:54.944 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:54.944 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:54.944 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:54.944 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:54.944 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:54.944 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:54.944 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:54.944 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:54.944 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:54.944 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:54.944 Initialization complete. Launching workers. 
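[Editorial aside, not part of the captured console stream:] The nvme_perf stage that starts here drives each attached namespace with spdk_nvme_perf at queue depth 128 (-q 128), issuing 12288-byte reads (-w read -o 12288) for one second (-t 1); the -LL flag evidently enables the per-device latency summaries and histograms that follow. The MiB/s column of the Latency(us) table below is simply the IOPS column scaled by the I/O size, so the figures can be sanity-checked; a hedged one-liner using the first row reported by this run:

    # Sanity check: MiB/s = IOPS * io_size_bytes / 2^20.
    # The first table row below reports 12728.01 IOPS of 12288-byte reads:
    awk 'BEGIN { printf "%.2f MiB/s\n", 12728.01 * 12288 / 1048576 }'   # prints 149.16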
00:13:54.944 ======================================================== 00:13:54.944 Latency(us) 00:13:54.944 Device Information : IOPS MiB/s Average min max 00:13:54.944 PCIE (0000:00:10.0) NSID 1 from core 0: 12728.01 149.16 10075.27 7628.57 40821.83 00:13:54.944 PCIE (0000:00:11.0) NSID 1 from core 0: 12728.01 149.16 10051.42 7684.66 37903.31 00:13:54.944 PCIE (0000:00:13.0) NSID 1 from core 0: 12728.01 149.16 10025.36 7744.89 35802.28 00:13:54.944 PCIE (0000:00:12.0) NSID 1 from core 0: 12728.01 149.16 9998.53 7806.90 33014.25 00:13:54.944 PCIE (0000:00:12.0) NSID 2 from core 0: 12728.01 149.16 9970.61 7801.26 30310.20 00:13:54.944 PCIE (0000:00:12.0) NSID 3 from core 0: 12728.01 149.16 9943.72 7797.54 27265.89 00:13:54.944 ======================================================== 00:13:54.944 Total : 76368.04 894.94 10010.82 7628.57 40821.83 00:13:54.944 00:13:54.944 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:54.944 ================================================================================= 00:13:54.944 1.00000% : 7983.476us 00:13:54.944 10.00000% : 8340.945us 00:13:54.944 25.00000% : 8757.993us 00:13:54.944 50.00000% : 9353.775us 00:13:54.944 75.00000% : 10247.447us 00:13:54.944 90.00000% : 12868.887us 00:13:54.944 95.00000% : 13643.404us 00:13:54.944 98.00000% : 15073.280us 00:13:54.944 99.00000% : 28954.996us 00:13:54.944 99.50000% : 37891.724us 00:13:54.944 99.90000% : 40274.851us 00:13:54.944 99.99000% : 40751.476us 00:13:54.944 99.99900% : 40989.789us 00:13:54.944 99.99990% : 40989.789us 00:13:54.944 99.99999% : 40989.789us 00:13:54.944 00:13:54.944 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:54.944 ================================================================================= 00:13:54.944 1.00000% : 8043.055us 00:13:54.944 10.00000% : 8400.524us 00:13:54.944 25.00000% : 8757.993us 00:13:54.944 50.00000% : 9353.775us 00:13:54.944 75.00000% : 10247.447us 00:13:54.944 90.00000% : 12868.887us 00:13:54.944 95.00000% : 13464.669us 00:13:54.944 98.00000% : 15073.280us 00:13:54.944 99.00000% : 27048.495us 00:13:54.944 99.50000% : 35508.596us 00:13:54.944 99.90000% : 37653.411us 00:13:54.944 99.99000% : 37891.724us 00:13:54.944 99.99900% : 38130.036us 00:13:54.944 99.99990% : 38130.036us 00:13:54.944 99.99999% : 38130.036us 00:13:54.944 00:13:54.944 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:54.944 ================================================================================= 00:13:54.944 1.00000% : 8102.633us 00:13:54.944 10.00000% : 8400.524us 00:13:54.944 25.00000% : 8757.993us 00:13:54.944 50.00000% : 9353.775us 00:13:54.944 75.00000% : 10247.447us 00:13:54.944 90.00000% : 12809.309us 00:13:54.944 95.00000% : 13464.669us 00:13:54.944 98.00000% : 14894.545us 00:13:54.944 99.00000% : 24784.524us 00:13:54.944 99.50000% : 33125.469us 00:13:54.944 99.90000% : 35508.596us 00:13:54.944 99.99000% : 35985.222us 00:13:54.944 99.99900% : 35985.222us 00:13:54.944 99.99990% : 35985.222us 00:13:54.944 99.99999% : 35985.222us 00:13:54.944 00:13:54.944 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:54.944 ================================================================================= 00:13:54.944 1.00000% : 8102.633us 00:13:54.944 10.00000% : 8400.524us 00:13:54.944 25.00000% : 8757.993us 00:13:54.944 50.00000% : 9353.775us 00:13:54.944 75.00000% : 10247.447us 00:13:54.944 90.00000% : 12868.887us 00:13:54.944 95.00000% : 13464.669us 00:13:54.944 98.00000% : 14834.967us 00:13:54.944 
99.00000% : 22043.927us 00:13:54.944 99.50000% : 30384.873us 00:13:54.944 99.90000% : 32648.844us 00:13:54.944 99.99000% : 33125.469us 00:13:54.944 99.99900% : 33125.469us 00:13:54.944 99.99990% : 33125.469us 00:13:54.944 99.99999% : 33125.469us 00:13:54.944 00:13:54.944 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:54.944 ================================================================================= 00:13:54.944 1.00000% : 8102.633us 00:13:54.944 10.00000% : 8400.524us 00:13:54.944 25.00000% : 8757.993us 00:13:54.944 50.00000% : 9353.775us 00:13:54.944 75.00000% : 10247.447us 00:13:54.944 90.00000% : 12809.309us 00:13:54.944 95.00000% : 13524.247us 00:13:54.944 98.00000% : 14894.545us 00:13:54.944 99.00000% : 19065.018us 00:13:54.944 99.50000% : 27405.964us 00:13:54.944 99.90000% : 29789.091us 00:13:54.944 99.99000% : 30384.873us 00:13:54.944 99.99900% : 30384.873us 00:13:54.944 99.99990% : 30384.873us 00:13:54.944 99.99999% : 30384.873us 00:13:54.944 00:13:54.944 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:54.944 ================================================================================= 00:13:54.944 1.00000% : 8102.633us 00:13:54.944 10.00000% : 8400.524us 00:13:54.944 25.00000% : 8757.993us 00:13:54.944 50.00000% : 9353.775us 00:13:54.944 75.00000% : 10247.447us 00:13:54.944 90.00000% : 12868.887us 00:13:54.944 95.00000% : 13464.669us 00:13:54.944 98.00000% : 15013.702us 00:13:54.944 99.00000% : 16443.578us 00:13:54.944 99.50000% : 24665.367us 00:13:54.945 99.90000% : 26810.182us 00:13:54.945 99.99000% : 27286.807us 00:13:54.945 99.99900% : 27286.807us 00:13:54.945 99.99990% : 27286.807us 00:13:54.945 99.99999% : 27286.807us 00:13:54.945 00:13:54.945 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:54.945 ============================================================================== 00:13:54.945 Range in us Cumulative IO count 00:13:54.945 7626.007 - 7685.585: 0.0236% ( 3) 00:13:54.945 7685.585 - 7745.164: 0.0864% ( 8) 00:13:54.945 7745.164 - 7804.742: 0.1492% ( 8) 00:13:54.945 7804.742 - 7864.320: 0.3141% ( 21) 00:13:54.945 7864.320 - 7923.898: 0.6360% ( 41) 00:13:54.945 7923.898 - 7983.476: 1.2641% ( 80) 00:13:54.945 7983.476 - 8043.055: 2.3948% ( 144) 00:13:54.945 8043.055 - 8102.633: 3.7924% ( 178) 00:13:54.945 8102.633 - 8162.211: 5.4256% ( 208) 00:13:54.945 8162.211 - 8221.789: 7.1372% ( 218) 00:13:54.945 8221.789 - 8281.367: 9.1473% ( 256) 00:13:54.945 8281.367 - 8340.945: 11.1416% ( 254) 00:13:54.945 8340.945 - 8400.524: 13.3480% ( 281) 00:13:54.945 8400.524 - 8460.102: 15.4680% ( 270) 00:13:54.945 8460.102 - 8519.680: 17.6822% ( 282) 00:13:54.945 8519.680 - 8579.258: 19.9906% ( 294) 00:13:54.945 8579.258 - 8638.836: 22.3383% ( 299) 00:13:54.945 8638.836 - 8698.415: 24.5917% ( 287) 00:13:54.945 8698.415 - 8757.993: 26.9237% ( 297) 00:13:54.945 8757.993 - 8817.571: 29.3263% ( 306) 00:13:54.945 8817.571 - 8877.149: 31.7604% ( 310) 00:13:54.945 8877.149 - 8936.727: 34.2808% ( 321) 00:13:54.945 8936.727 - 8996.305: 36.8797% ( 331) 00:13:54.945 8996.305 - 9055.884: 39.4394% ( 326) 00:13:54.945 9055.884 - 9115.462: 41.9127% ( 315) 00:13:54.945 9115.462 - 9175.040: 44.2447% ( 297) 00:13:54.945 9175.040 - 9234.618: 46.6002% ( 300) 00:13:54.945 9234.618 - 9294.196: 48.7673% ( 276) 00:13:54.945 9294.196 - 9353.775: 50.8872% ( 270) 00:13:54.945 9353.775 - 9413.353: 52.9052% ( 257) 00:13:54.945 9413.353 - 9472.931: 54.9545% ( 261) 00:13:54.945 9472.931 - 9532.509: 56.9331% ( 252) 00:13:54.945 9532.509 - 
9592.087: 58.9196% ( 253) 00:13:54.945 9592.087 - 9651.665: 60.6391% ( 219) 00:13:54.945 9651.665 - 9711.244: 62.3273% ( 215) 00:13:54.945 9711.244 - 9770.822: 63.9997% ( 213) 00:13:54.945 9770.822 - 9830.400: 65.6171% ( 206) 00:13:54.945 9830.400 - 9889.978: 67.2111% ( 203) 00:13:54.945 9889.978 - 9949.556: 68.7579% ( 197) 00:13:54.945 9949.556 - 10009.135: 70.1869% ( 182) 00:13:54.945 10009.135 - 10068.713: 71.6237% ( 183) 00:13:54.945 10068.713 - 10128.291: 72.9271% ( 166) 00:13:54.945 10128.291 - 10187.869: 74.2070% ( 163) 00:13:54.945 10187.869 - 10247.447: 75.4083% ( 153) 00:13:54.945 10247.447 - 10307.025: 76.3427% ( 119) 00:13:54.945 10307.025 - 10366.604: 77.1828% ( 107) 00:13:54.945 10366.604 - 10426.182: 77.9444% ( 97) 00:13:54.945 10426.182 - 10485.760: 78.4783% ( 68) 00:13:54.945 10485.760 - 10545.338: 79.0908% ( 78) 00:13:54.945 10545.338 - 10604.916: 79.5619% ( 60) 00:13:54.945 10604.916 - 10664.495: 80.0094% ( 57) 00:13:54.945 10664.495 - 10724.073: 80.4334% ( 54) 00:13:54.945 10724.073 - 10783.651: 80.8888% ( 58) 00:13:54.945 10783.651 - 10843.229: 81.2893% ( 51) 00:13:54.945 10843.229 - 10902.807: 81.5876% ( 38) 00:13:54.945 10902.807 - 10962.385: 81.8546% ( 34) 00:13:54.945 10962.385 - 11021.964: 82.0509% ( 25) 00:13:54.945 11021.964 - 11081.542: 82.2158% ( 21) 00:13:54.945 11081.542 - 11141.120: 82.3571% ( 18) 00:13:54.945 11141.120 - 11200.698: 82.4749% ( 15) 00:13:54.945 11200.698 - 11260.276: 82.5769% ( 13) 00:13:54.945 11260.276 - 11319.855: 82.7340% ( 20) 00:13:54.945 11319.855 - 11379.433: 82.8046% ( 9) 00:13:54.945 11379.433 - 11439.011: 82.9067% ( 13) 00:13:54.945 11439.011 - 11498.589: 82.9931% ( 11) 00:13:54.945 11498.589 - 11558.167: 83.0638% ( 9) 00:13:54.945 11558.167 - 11617.745: 83.1187% ( 7) 00:13:54.945 11617.745 - 11677.324: 83.2286% ( 14) 00:13:54.945 11677.324 - 11736.902: 83.3386% ( 14) 00:13:54.945 11736.902 - 11796.480: 83.4720% ( 17) 00:13:54.945 11796.480 - 11856.058: 83.7547% ( 36) 00:13:54.945 11856.058 - 11915.636: 84.0452% ( 37) 00:13:54.945 11915.636 - 11975.215: 84.4300% ( 49) 00:13:54.945 11975.215 - 12034.793: 84.7833% ( 45) 00:13:54.945 12034.793 - 12094.371: 85.1366% ( 45) 00:13:54.945 12094.371 - 12153.949: 85.4742% ( 43) 00:13:54.945 12153.949 - 12213.527: 85.7805% ( 39) 00:13:54.945 12213.527 - 12273.105: 86.1338% ( 45) 00:13:54.945 12273.105 - 12332.684: 86.4400% ( 39) 00:13:54.945 12332.684 - 12392.262: 86.8012% ( 46) 00:13:54.945 12392.262 - 12451.840: 87.1624% ( 46) 00:13:54.945 12451.840 - 12511.418: 87.6178% ( 58) 00:13:54.945 12511.418 - 12570.996: 88.0025% ( 49) 00:13:54.945 12570.996 - 12630.575: 88.4265% ( 54) 00:13:54.945 12630.575 - 12690.153: 88.7563% ( 42) 00:13:54.945 12690.153 - 12749.731: 89.3059% ( 70) 00:13:54.945 12749.731 - 12809.309: 89.6671% ( 46) 00:13:54.945 12809.309 - 12868.887: 90.0911% ( 54) 00:13:54.945 12868.887 - 12928.465: 90.4837% ( 50) 00:13:54.945 12928.465 - 12988.044: 90.9077% ( 54) 00:13:54.945 12988.044 - 13047.622: 91.3238% ( 53) 00:13:54.945 13047.622 - 13107.200: 91.7085% ( 49) 00:13:54.945 13107.200 - 13166.778: 92.1168% ( 52) 00:13:54.945 13166.778 - 13226.356: 92.5094% ( 50) 00:13:54.945 13226.356 - 13285.935: 92.9099% ( 51) 00:13:54.945 13285.935 - 13345.513: 93.2632% ( 45) 00:13:54.945 13345.513 - 13405.091: 93.6401% ( 48) 00:13:54.945 13405.091 - 13464.669: 94.0248% ( 49) 00:13:54.945 13464.669 - 13524.247: 94.3938% ( 47) 00:13:54.945 13524.247 - 13583.825: 94.7707% ( 48) 00:13:54.945 13583.825 - 13643.404: 95.1869% ( 53) 00:13:54.945 13643.404 - 13702.982: 95.5009% ( 40) 00:13:54.945 
13702.982 - 13762.560: 95.9642% ( 59) 00:13:54.945 13762.560 - 13822.138: 96.3411% ( 48) 00:13:54.945 13822.138 - 13881.716: 96.5688% ( 29) 00:13:54.945 13881.716 - 13941.295: 96.7337% ( 21) 00:13:54.945 13941.295 - 14000.873: 96.9143% ( 23) 00:13:54.945 14000.873 - 14060.451: 97.0556% ( 18) 00:13:54.945 14060.451 - 14120.029: 97.1027% ( 6) 00:13:54.945 14120.029 - 14179.607: 97.2205% ( 15) 00:13:54.945 14179.607 - 14239.185: 97.3147% ( 12) 00:13:54.945 14239.185 - 14298.764: 97.4089% ( 12) 00:13:54.945 14298.764 - 14358.342: 97.5031% ( 12) 00:13:54.945 14358.342 - 14417.920: 97.5581% ( 7) 00:13:54.945 14417.920 - 14477.498: 97.5974% ( 5) 00:13:54.945 14477.498 - 14537.076: 97.6445% ( 6) 00:13:54.945 14537.076 - 14596.655: 97.6916% ( 6) 00:13:54.945 14596.655 - 14656.233: 97.7308% ( 5) 00:13:54.945 14656.233 - 14715.811: 97.7701% ( 5) 00:13:54.945 14715.811 - 14775.389: 97.8172% ( 6) 00:13:54.945 14775.389 - 14834.967: 97.8565% ( 5) 00:13:54.945 14834.967 - 14894.545: 97.9036% ( 6) 00:13:54.945 14894.545 - 14954.124: 97.9350% ( 4) 00:13:54.945 14954.124 - 15013.702: 97.9664% ( 4) 00:13:54.945 15013.702 - 15073.280: 98.0214% ( 7) 00:13:54.945 15073.280 - 15132.858: 98.0528% ( 4) 00:13:54.945 15132.858 - 15192.436: 98.1234% ( 9) 00:13:54.945 15192.436 - 15252.015: 98.1784% ( 7) 00:13:54.945 15252.015 - 15371.171: 98.3040% ( 16) 00:13:54.945 15371.171 - 15490.327: 98.4532% ( 19) 00:13:54.945 15490.327 - 15609.484: 98.5631% ( 14) 00:13:54.945 15609.484 - 15728.640: 98.6731% ( 14) 00:13:54.945 15728.640 - 15847.796: 98.7673% ( 12) 00:13:54.945 15847.796 - 15966.953: 98.8065% ( 5) 00:13:54.945 15966.953 - 16086.109: 98.8536% ( 6) 00:13:54.945 16086.109 - 16205.265: 98.8929% ( 5) 00:13:54.945 16205.265 - 16324.422: 98.9400% ( 6) 00:13:54.945 16324.422 - 16443.578: 98.9950% ( 7) 00:13:54.945 28835.840 - 28954.996: 99.0028% ( 1) 00:13:54.945 28954.996 - 29074.153: 99.0107% ( 1) 00:13:54.945 29074.153 - 29193.309: 99.0421% ( 4) 00:13:54.945 29193.309 - 29312.465: 99.0578% ( 2) 00:13:54.945 29312.465 - 29431.622: 99.0735% ( 2) 00:13:54.945 29431.622 - 29550.778: 99.1049% ( 4) 00:13:54.945 29550.778 - 29669.935: 99.1128% ( 1) 00:13:54.945 29669.935 - 29789.091: 99.1442% ( 4) 00:13:54.945 29789.091 - 29908.247: 99.1599% ( 2) 00:13:54.945 29908.247 - 30027.404: 99.1834% ( 3) 00:13:54.945 30027.404 - 30146.560: 99.1991% ( 2) 00:13:54.946 30146.560 - 30265.716: 99.2227% ( 3) 00:13:54.946 30265.716 - 30384.873: 99.2305% ( 1) 00:13:54.946 30384.873 - 30504.029: 99.2619% ( 4) 00:13:54.946 30504.029 - 30742.342: 99.3012% ( 5) 00:13:54.946 30742.342 - 30980.655: 99.3405% ( 5) 00:13:54.946 30980.655 - 31218.967: 99.3876% ( 6) 00:13:54.946 31218.967 - 31457.280: 99.4268% ( 5) 00:13:54.946 31457.280 - 31695.593: 99.4739% ( 6) 00:13:54.946 31695.593 - 31933.905: 99.4975% ( 3) 00:13:54.946 37653.411 - 37891.724: 99.5053% ( 1) 00:13:54.946 37891.724 - 38130.036: 99.5446% ( 5) 00:13:54.946 38130.036 - 38368.349: 99.5917% ( 6) 00:13:54.946 38368.349 - 38606.662: 99.6310% ( 5) 00:13:54.946 38606.662 - 38844.975: 99.6702% ( 5) 00:13:54.946 38844.975 - 39083.287: 99.7095% ( 5) 00:13:54.946 39083.287 - 39321.600: 99.7566% ( 6) 00:13:54.946 39321.600 - 39559.913: 99.7959% ( 5) 00:13:54.946 39559.913 - 39798.225: 99.8351% ( 5) 00:13:54.946 39798.225 - 40036.538: 99.8822% ( 6) 00:13:54.946 40036.538 - 40274.851: 99.9058% ( 3) 00:13:54.946 40274.851 - 40513.164: 99.9450% ( 5) 00:13:54.946 40513.164 - 40751.476: 99.9921% ( 6) 00:13:54.946 40751.476 - 40989.789: 100.0000% ( 1) 00:13:54.946 00:13:54.946 Latency histogram for 
PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:54.946 ============================================================================== 00:13:54.946 Range in us Cumulative IO count 00:13:54.946 7626.007 - 7685.585: 0.0079% ( 1) 00:13:54.946 7685.585 - 7745.164: 0.0550% ( 6) 00:13:54.946 7745.164 - 7804.742: 0.1178% ( 8) 00:13:54.946 7804.742 - 7864.320: 0.1727% ( 7) 00:13:54.946 7864.320 - 7923.898: 0.3062% ( 17) 00:13:54.946 7923.898 - 7983.476: 0.4947% ( 24) 00:13:54.946 7983.476 - 8043.055: 1.0207% ( 67) 00:13:54.946 8043.055 - 8102.633: 1.9315% ( 116) 00:13:54.946 8102.633 - 8162.211: 3.3527% ( 181) 00:13:54.946 8162.211 - 8221.789: 4.9859% ( 208) 00:13:54.946 8221.789 - 8281.367: 6.9488% ( 250) 00:13:54.946 8281.367 - 8340.945: 9.1316% ( 278) 00:13:54.946 8340.945 - 8400.524: 11.6128% ( 316) 00:13:54.946 8400.524 - 8460.102: 14.0861% ( 315) 00:13:54.946 8460.102 - 8519.680: 16.7085% ( 334) 00:13:54.946 8519.680 - 8579.258: 19.3389% ( 335) 00:13:54.946 8579.258 - 8638.836: 22.0477% ( 345) 00:13:54.946 8638.836 - 8698.415: 24.7330% ( 342) 00:13:54.946 8698.415 - 8757.993: 27.3869% ( 338) 00:13:54.946 8757.993 - 8817.571: 30.2057% ( 359) 00:13:54.946 8817.571 - 8877.149: 32.9146% ( 345) 00:13:54.946 8877.149 - 8936.727: 35.7334% ( 359) 00:13:54.946 8936.727 - 8996.305: 38.3087% ( 328) 00:13:54.946 8996.305 - 9055.884: 40.6957% ( 304) 00:13:54.946 9055.884 - 9115.462: 42.7842% ( 266) 00:13:54.946 9115.462 - 9175.040: 44.8571% ( 264) 00:13:54.946 9175.040 - 9234.618: 46.8200% ( 250) 00:13:54.946 9234.618 - 9294.196: 48.8929% ( 264) 00:13:54.946 9294.196 - 9353.775: 50.8794% ( 253) 00:13:54.946 9353.775 - 9413.353: 52.9209% ( 260) 00:13:54.946 9413.353 - 9472.931: 54.8602% ( 247) 00:13:54.946 9472.931 - 9532.509: 56.7525% ( 241) 00:13:54.946 9532.509 - 9592.087: 58.6762% ( 245) 00:13:54.946 9592.087 - 9651.665: 60.5606% ( 240) 00:13:54.946 9651.665 - 9711.244: 62.4293% ( 238) 00:13:54.946 9711.244 - 9770.822: 64.2431% ( 231) 00:13:54.946 9770.822 - 9830.400: 66.0254% ( 227) 00:13:54.946 9830.400 - 9889.978: 67.8156% ( 228) 00:13:54.946 9889.978 - 9949.556: 69.5509% ( 221) 00:13:54.946 9949.556 - 10009.135: 71.1369% ( 202) 00:13:54.946 10009.135 - 10068.713: 72.5660% ( 182) 00:13:54.946 10068.713 - 10128.291: 73.8536% ( 164) 00:13:54.946 10128.291 - 10187.869: 74.9607% ( 141) 00:13:54.946 10187.869 - 10247.447: 75.8951% ( 119) 00:13:54.946 10247.447 - 10307.025: 76.7117% ( 104) 00:13:54.946 10307.025 - 10366.604: 77.4419% ( 93) 00:13:54.946 10366.604 - 10426.182: 78.0857% ( 82) 00:13:54.946 10426.182 - 10485.760: 78.6275% ( 69) 00:13:54.946 10485.760 - 10545.338: 79.1457% ( 66) 00:13:54.946 10545.338 - 10604.916: 79.7425% ( 76) 00:13:54.946 10604.916 - 10664.495: 80.2450% ( 64) 00:13:54.946 10664.495 - 10724.073: 80.7318% ( 62) 00:13:54.946 10724.073 - 10783.651: 81.1479% ( 53) 00:13:54.946 10783.651 - 10843.229: 81.4227% ( 35) 00:13:54.946 10843.229 - 10902.807: 81.6504% ( 29) 00:13:54.946 10902.807 - 10962.385: 81.8467% ( 25) 00:13:54.946 10962.385 - 11021.964: 82.0273% ( 23) 00:13:54.946 11021.964 - 11081.542: 82.1451% ( 15) 00:13:54.946 11081.542 - 11141.120: 82.2629% ( 15) 00:13:54.946 11141.120 - 11200.698: 82.3649% ( 13) 00:13:54.946 11200.698 - 11260.276: 82.4670% ( 13) 00:13:54.946 11260.276 - 11319.855: 82.5455% ( 10) 00:13:54.946 11319.855 - 11379.433: 82.6241% ( 10) 00:13:54.946 11379.433 - 11439.011: 82.7026% ( 10) 00:13:54.946 11439.011 - 11498.589: 82.7968% ( 12) 00:13:54.946 11498.589 - 11558.167: 82.8910% ( 12) 00:13:54.946 11558.167 - 11617.745: 82.9617% ( 9) 00:13:54.946 
11617.745 - 11677.324: 83.0402% ( 10) 00:13:54.946 11677.324 - 11736.902: 83.1109% ( 9) 00:13:54.946 11736.902 - 11796.480: 83.1972% ( 11) 00:13:54.946 11796.480 - 11856.058: 83.2601% ( 8) 00:13:54.946 11856.058 - 11915.636: 83.3386% ( 10) 00:13:54.946 11915.636 - 11975.215: 83.4563% ( 15) 00:13:54.946 11975.215 - 12034.793: 83.6448% ( 24) 00:13:54.946 12034.793 - 12094.371: 83.9432% ( 38) 00:13:54.946 12094.371 - 12153.949: 84.3514% ( 52) 00:13:54.946 12153.949 - 12213.527: 84.8461% ( 63) 00:13:54.946 12213.527 - 12273.105: 85.3015% ( 58) 00:13:54.946 12273.105 - 12332.684: 85.7726% ( 60) 00:13:54.946 12332.684 - 12392.262: 86.2594% ( 62) 00:13:54.946 12392.262 - 12451.840: 86.7305% ( 60) 00:13:54.946 12451.840 - 12511.418: 87.2173% ( 62) 00:13:54.946 12511.418 - 12570.996: 87.7120% ( 63) 00:13:54.946 12570.996 - 12630.575: 88.1988% ( 62) 00:13:54.946 12630.575 - 12690.153: 88.7170% ( 66) 00:13:54.946 12690.153 - 12749.731: 89.2431% ( 67) 00:13:54.946 12749.731 - 12809.309: 89.7535% ( 65) 00:13:54.946 12809.309 - 12868.887: 90.2481% ( 63) 00:13:54.946 12868.887 - 12928.465: 90.7349% ( 62) 00:13:54.946 12928.465 - 12988.044: 91.2060% ( 60) 00:13:54.946 12988.044 - 13047.622: 91.7164% ( 65) 00:13:54.946 13047.622 - 13107.200: 92.2111% ( 63) 00:13:54.946 13107.200 - 13166.778: 92.7136% ( 64) 00:13:54.946 13166.778 - 13226.356: 93.2239% ( 65) 00:13:54.946 13226.356 - 13285.935: 93.7186% ( 63) 00:13:54.946 13285.935 - 13345.513: 94.1897% ( 60) 00:13:54.946 13345.513 - 13405.091: 94.6530% ( 59) 00:13:54.946 13405.091 - 13464.669: 95.1162% ( 59) 00:13:54.946 13464.669 - 13524.247: 95.5952% ( 61) 00:13:54.946 13524.247 - 13583.825: 96.0113% ( 53) 00:13:54.946 13583.825 - 13643.404: 96.3254% ( 40) 00:13:54.946 13643.404 - 13702.982: 96.5923% ( 34) 00:13:54.946 13702.982 - 13762.560: 96.7415% ( 19) 00:13:54.946 13762.560 - 13822.138: 96.8514% ( 14) 00:13:54.946 13822.138 - 13881.716: 96.9378% ( 11) 00:13:54.946 13881.716 - 13941.295: 97.0242% ( 11) 00:13:54.946 13941.295 - 14000.873: 97.1027% ( 10) 00:13:54.946 14000.873 - 14060.451: 97.2126% ( 14) 00:13:54.946 14060.451 - 14120.029: 97.3304% ( 15) 00:13:54.946 14120.029 - 14179.607: 97.4168% ( 11) 00:13:54.946 14179.607 - 14239.185: 97.5110% ( 12) 00:13:54.946 14239.185 - 14298.764: 97.5738% ( 8) 00:13:54.946 14298.764 - 14358.342: 97.6131% ( 5) 00:13:54.946 14358.342 - 14417.920: 97.6366% ( 3) 00:13:54.946 14417.920 - 14477.498: 97.6602% ( 3) 00:13:54.946 14477.498 - 14537.076: 97.6759% ( 2) 00:13:54.946 14537.076 - 14596.655: 97.6994% ( 3) 00:13:54.946 14596.655 - 14656.233: 97.7230% ( 3) 00:13:54.946 14656.233 - 14715.811: 97.7465% ( 3) 00:13:54.946 14715.811 - 14775.389: 97.7937% ( 6) 00:13:54.946 14775.389 - 14834.967: 97.8408% ( 6) 00:13:54.946 14834.967 - 14894.545: 97.8879% ( 6) 00:13:54.946 14894.545 - 14954.124: 97.9350% ( 6) 00:13:54.946 14954.124 - 15013.702: 97.9821% ( 6) 00:13:54.946 15013.702 - 15073.280: 98.0371% ( 7) 00:13:54.946 15073.280 - 15132.858: 98.0842% ( 6) 00:13:54.947 15132.858 - 15192.436: 98.1313% ( 6) 00:13:54.947 15192.436 - 15252.015: 98.1862% ( 7) 00:13:54.947 15252.015 - 15371.171: 98.2805% ( 12) 00:13:54.947 15371.171 - 15490.327: 98.3825% ( 13) 00:13:54.947 15490.327 - 15609.484: 98.4846% ( 13) 00:13:54.947 15609.484 - 15728.640: 98.5788% ( 12) 00:13:54.947 15728.640 - 15847.796: 98.6809% ( 13) 00:13:54.947 15847.796 - 15966.953: 98.7437% ( 8) 00:13:54.947 15966.953 - 16086.109: 98.7987% ( 7) 00:13:54.947 16086.109 - 16205.265: 98.8536% ( 7) 00:13:54.947 16205.265 - 16324.422: 98.9086% ( 7) 00:13:54.947 
16324.422 - 16443.578: 98.9636% ( 7) 00:13:54.947 16443.578 - 16562.735: 98.9950% ( 4) 00:13:54.947 26929.338 - 27048.495: 99.0107% ( 2) 00:13:54.947 27048.495 - 27167.651: 99.0264% ( 2) 00:13:54.947 27167.651 - 27286.807: 99.0499% ( 3) 00:13:54.947 27286.807 - 27405.964: 99.0735% ( 3) 00:13:54.947 27405.964 - 27525.120: 99.0970% ( 3) 00:13:54.947 27525.120 - 27644.276: 99.1206% ( 3) 00:13:54.947 27644.276 - 27763.433: 99.1363% ( 2) 00:13:54.947 27763.433 - 27882.589: 99.1599% ( 3) 00:13:54.947 27882.589 - 28001.745: 99.1834% ( 3) 00:13:54.947 28001.745 - 28120.902: 99.2070% ( 3) 00:13:54.947 28120.902 - 28240.058: 99.2227% ( 2) 00:13:54.947 28240.058 - 28359.215: 99.2462% ( 3) 00:13:54.947 28359.215 - 28478.371: 99.2698% ( 3) 00:13:54.947 28478.371 - 28597.527: 99.2933% ( 3) 00:13:54.947 28597.527 - 28716.684: 99.3169% ( 3) 00:13:54.947 28716.684 - 28835.840: 99.3405% ( 3) 00:13:54.947 28835.840 - 28954.996: 99.3562% ( 2) 00:13:54.947 28954.996 - 29074.153: 99.3797% ( 3) 00:13:54.947 29074.153 - 29193.309: 99.4033% ( 3) 00:13:54.947 29193.309 - 29312.465: 99.4268% ( 3) 00:13:54.947 29312.465 - 29431.622: 99.4504% ( 3) 00:13:54.947 29431.622 - 29550.778: 99.4739% ( 3) 00:13:54.947 29550.778 - 29669.935: 99.4975% ( 3) 00:13:54.947 35270.284 - 35508.596: 99.5367% ( 5) 00:13:54.947 35508.596 - 35746.909: 99.5917% ( 7) 00:13:54.947 35746.909 - 35985.222: 99.6231% ( 4) 00:13:54.947 35985.222 - 36223.535: 99.6702% ( 6) 00:13:54.947 36223.535 - 36461.847: 99.7173% ( 6) 00:13:54.947 36461.847 - 36700.160: 99.7644% ( 6) 00:13:54.947 36700.160 - 36938.473: 99.8116% ( 6) 00:13:54.947 36938.473 - 37176.785: 99.8587% ( 6) 00:13:54.947 37176.785 - 37415.098: 99.8979% ( 5) 00:13:54.947 37415.098 - 37653.411: 99.9450% ( 6) 00:13:54.947 37653.411 - 37891.724: 99.9921% ( 6) 00:13:54.947 37891.724 - 38130.036: 100.0000% ( 1) 00:13:54.947 00:13:54.947 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:54.947 ============================================================================== 00:13:54.947 Range in us Cumulative IO count 00:13:54.947 7685.585 - 7745.164: 0.0079% ( 1) 00:13:54.947 7745.164 - 7804.742: 0.0393% ( 4) 00:13:54.947 7804.742 - 7864.320: 0.1492% ( 14) 00:13:54.947 7864.320 - 7923.898: 0.2670% ( 15) 00:13:54.947 7923.898 - 7983.476: 0.5104% ( 31) 00:13:54.947 7983.476 - 8043.055: 0.9422% ( 55) 00:13:54.947 8043.055 - 8102.633: 1.8766% ( 119) 00:13:54.947 8102.633 - 8162.211: 3.1014% ( 156) 00:13:54.947 8162.211 - 8221.789: 4.7817% ( 214) 00:13:54.947 8221.789 - 8281.367: 6.6976% ( 244) 00:13:54.947 8281.367 - 8340.945: 8.8568% ( 275) 00:13:54.947 8340.945 - 8400.524: 11.1652% ( 294) 00:13:54.947 8400.524 - 8460.102: 13.5757% ( 307) 00:13:54.947 8460.102 - 8519.680: 16.0804% ( 319) 00:13:54.947 8519.680 - 8579.258: 18.6401% ( 326) 00:13:54.947 8579.258 - 8638.836: 21.3960% ( 351) 00:13:54.947 8638.836 - 8698.415: 24.1913% ( 356) 00:13:54.947 8698.415 - 8757.993: 26.9001% ( 345) 00:13:54.947 8757.993 - 8817.571: 29.6796% ( 354) 00:13:54.947 8817.571 - 8877.149: 32.4670% ( 355) 00:13:54.947 8877.149 - 8936.727: 35.1916% ( 347) 00:13:54.947 8936.727 - 8996.305: 37.8926% ( 344) 00:13:54.947 8996.305 - 9055.884: 40.3580% ( 314) 00:13:54.947 9055.884 - 9115.462: 42.5801% ( 283) 00:13:54.947 9115.462 - 9175.040: 44.5195% ( 247) 00:13:54.947 9175.040 - 9234.618: 46.3960% ( 239) 00:13:54.947 9234.618 - 9294.196: 48.3433% ( 248) 00:13:54.947 9294.196 - 9353.775: 50.3298% ( 253) 00:13:54.947 9353.775 - 9413.353: 52.4812% ( 274) 00:13:54.947 9413.353 - 9472.931: 54.5462% ( 263) 
00:13:54.947 9472.931 - 9532.509: 56.5170% ( 251) 00:13:54.947 9532.509 - 9592.087: 58.5349% ( 257) 00:13:54.947 9592.087 - 9651.665: 60.4899% ( 249) 00:13:54.947 9651.665 - 9711.244: 62.3430% ( 236) 00:13:54.947 9711.244 - 9770.822: 64.2117% ( 238) 00:13:54.947 9770.822 - 9830.400: 65.9469% ( 221) 00:13:54.947 9830.400 - 9889.978: 67.6508% ( 217) 00:13:54.947 9889.978 - 9949.556: 69.2996% ( 210) 00:13:54.947 9949.556 - 10009.135: 70.8621% ( 199) 00:13:54.947 10009.135 - 10068.713: 72.2990% ( 183) 00:13:54.947 10068.713 - 10128.291: 73.6181% ( 168) 00:13:54.947 10128.291 - 10187.869: 74.8665% ( 159) 00:13:54.947 10187.869 - 10247.447: 75.9736% ( 141) 00:13:54.947 10247.447 - 10307.025: 76.9551% ( 125) 00:13:54.947 10307.025 - 10366.604: 77.6853% ( 93) 00:13:54.947 10366.604 - 10426.182: 78.3134% ( 80) 00:13:54.947 10426.182 - 10485.760: 78.9180% ( 77) 00:13:54.947 10485.760 - 10545.338: 79.4284% ( 65) 00:13:54.947 10545.338 - 10604.916: 79.9623% ( 68) 00:13:54.947 10604.916 - 10664.495: 80.4727% ( 65) 00:13:54.947 10664.495 - 10724.073: 80.9124% ( 56) 00:13:54.947 10724.073 - 10783.651: 81.2421% ( 42) 00:13:54.947 10783.651 - 10843.229: 81.5327% ( 37) 00:13:54.947 10843.229 - 10902.807: 81.7761% ( 31) 00:13:54.947 10902.807 - 10962.385: 81.9645% ( 24) 00:13:54.947 10962.385 - 11021.964: 82.1058% ( 18) 00:13:54.947 11021.964 - 11081.542: 82.2393% ( 17) 00:13:54.947 11081.542 - 11141.120: 82.3728% ( 17) 00:13:54.947 11141.120 - 11200.698: 82.4984% ( 16) 00:13:54.947 11200.698 - 11260.276: 82.6084% ( 14) 00:13:54.947 11260.276 - 11319.855: 82.7104% ( 13) 00:13:54.947 11319.855 - 11379.433: 82.8046% ( 12) 00:13:54.947 11379.433 - 11439.011: 82.8989% ( 12) 00:13:54.947 11439.011 - 11498.589: 82.9931% ( 12) 00:13:54.947 11498.589 - 11558.167: 83.0952% ( 13) 00:13:54.947 11558.167 - 11617.745: 83.2286% ( 17) 00:13:54.947 11617.745 - 11677.324: 83.3386% ( 14) 00:13:54.947 11677.324 - 11736.902: 83.4485% ( 14) 00:13:54.947 11736.902 - 11796.480: 83.5427% ( 12) 00:13:54.947 11796.480 - 11856.058: 83.6134% ( 9) 00:13:54.947 11856.058 - 11915.636: 83.7155% ( 13) 00:13:54.947 11915.636 - 11975.215: 83.8411% ( 16) 00:13:54.947 11975.215 - 12034.793: 84.1080% ( 34) 00:13:54.947 12034.793 - 12094.371: 84.4457% ( 43) 00:13:54.947 12094.371 - 12153.949: 84.8383% ( 50) 00:13:54.947 12153.949 - 12213.527: 85.2937% ( 58) 00:13:54.947 12213.527 - 12273.105: 85.7412% ( 57) 00:13:54.947 12273.105 - 12332.684: 86.2987% ( 71) 00:13:54.947 12332.684 - 12392.262: 86.7227% ( 54) 00:13:54.947 12392.262 - 12451.840: 87.1781% ( 58) 00:13:54.947 12451.840 - 12511.418: 87.6413% ( 59) 00:13:54.947 12511.418 - 12570.996: 88.1203% ( 61) 00:13:54.947 12570.996 - 12630.575: 88.6149% ( 63) 00:13:54.947 12630.575 - 12690.153: 89.1175% ( 64) 00:13:54.947 12690.153 - 12749.731: 89.6043% ( 62) 00:13:54.947 12749.731 - 12809.309: 90.0989% ( 63) 00:13:54.947 12809.309 - 12868.887: 90.5779% ( 61) 00:13:54.947 12868.887 - 12928.465: 91.0490% ( 60) 00:13:54.947 12928.465 - 12988.044: 91.4887% ( 56) 00:13:54.947 12988.044 - 13047.622: 91.9598% ( 60) 00:13:54.947 13047.622 - 13107.200: 92.3995% ( 56) 00:13:54.947 13107.200 - 13166.778: 92.8706% ( 60) 00:13:54.947 13166.778 - 13226.356: 93.3574% ( 62) 00:13:54.947 13226.356 - 13285.935: 93.7657% ( 52) 00:13:54.947 13285.935 - 13345.513: 94.1583% ( 50) 00:13:54.947 13345.513 - 13405.091: 94.6058% ( 57) 00:13:54.947 13405.091 - 13464.669: 95.0298% ( 54) 00:13:54.947 13464.669 - 13524.247: 95.4381% ( 52) 00:13:54.947 13524.247 - 13583.825: 95.8464% ( 52) 00:13:54.947 13583.825 - 13643.404: 
96.1526% ( 39) 00:13:54.947 13643.404 - 13702.982: 96.4510% ( 38) 00:13:54.947 13702.982 - 13762.560: 96.6552% ( 26) 00:13:54.947 13762.560 - 13822.138: 96.7808% ( 16) 00:13:54.947 13822.138 - 13881.716: 96.8750% ( 12) 00:13:54.947 13881.716 - 13941.295: 96.9614% ( 11) 00:13:54.947 13941.295 - 14000.873: 97.0556% ( 12) 00:13:54.947 14000.873 - 14060.451: 97.1106% ( 7) 00:13:54.947 14060.451 - 14120.029: 97.1734% ( 8) 00:13:54.947 14120.029 - 14179.607: 97.2440% ( 9) 00:13:54.947 14179.607 - 14239.185: 97.3226% ( 10) 00:13:54.947 14239.185 - 14298.764: 97.3775% ( 7) 00:13:54.947 14298.764 - 14358.342: 97.4246% ( 6) 00:13:54.947 14358.342 - 14417.920: 97.5031% ( 10) 00:13:54.947 14417.920 - 14477.498: 97.5660% ( 8) 00:13:54.947 14477.498 - 14537.076: 97.6445% ( 10) 00:13:54.947 14537.076 - 14596.655: 97.7151% ( 9) 00:13:54.947 14596.655 - 14656.233: 97.7937% ( 10) 00:13:54.947 14656.233 - 14715.811: 97.8722% ( 10) 00:13:54.947 14715.811 - 14775.389: 97.9350% ( 8) 00:13:54.947 14775.389 - 14834.967: 97.9899% ( 7) 00:13:54.947 14834.967 - 14894.545: 98.0449% ( 7) 00:13:54.947 14894.545 - 14954.124: 98.1234% ( 10) 00:13:54.947 14954.124 - 15013.702: 98.2019% ( 10) 00:13:54.947 15013.702 - 15073.280: 98.2805% ( 10) 00:13:54.947 15073.280 - 15132.858: 98.3590% ( 10) 00:13:54.947 15132.858 - 15192.436: 98.4532% ( 12) 00:13:54.947 15192.436 - 15252.015: 98.5317% ( 10) 00:13:54.947 15252.015 - 15371.171: 98.6652% ( 17) 00:13:54.947 15371.171 - 15490.327: 98.7673% ( 13) 00:13:54.947 15490.327 - 15609.484: 98.8301% ( 8) 00:13:54.947 15609.484 - 15728.640: 98.8772% ( 6) 00:13:54.947 15728.640 - 15847.796: 98.9322% ( 7) 00:13:54.948 15847.796 - 15966.953: 98.9871% ( 7) 00:13:54.948 15966.953 - 16086.109: 98.9950% ( 1) 00:13:54.948 24665.367 - 24784.524: 99.0028% ( 1) 00:13:54.948 24784.524 - 24903.680: 99.0185% ( 2) 00:13:54.948 24903.680 - 25022.836: 99.0421% ( 3) 00:13:54.948 25022.836 - 25141.993: 99.0656% ( 3) 00:13:54.948 25141.993 - 25261.149: 99.0892% ( 3) 00:13:54.948 25261.149 - 25380.305: 99.1128% ( 3) 00:13:54.948 25380.305 - 25499.462: 99.1285% ( 2) 00:13:54.948 25499.462 - 25618.618: 99.1442% ( 2) 00:13:54.948 25618.618 - 25737.775: 99.1756% ( 4) 00:13:54.948 25737.775 - 25856.931: 99.1991% ( 3) 00:13:54.948 25856.931 - 25976.087: 99.2148% ( 2) 00:13:54.948 25976.087 - 26095.244: 99.2384% ( 3) 00:13:54.948 26095.244 - 26214.400: 99.2619% ( 3) 00:13:54.948 26214.400 - 26333.556: 99.2776% ( 2) 00:13:54.948 26333.556 - 26452.713: 99.3012% ( 3) 00:13:54.948 26452.713 - 26571.869: 99.3247% ( 3) 00:13:54.948 26571.869 - 26691.025: 99.3483% ( 3) 00:13:54.948 26691.025 - 26810.182: 99.3719% ( 3) 00:13:54.948 26810.182 - 26929.338: 99.3876% ( 2) 00:13:54.948 26929.338 - 27048.495: 99.4111% ( 3) 00:13:54.948 27048.495 - 27167.651: 99.4347% ( 3) 00:13:54.948 27167.651 - 27286.807: 99.4582% ( 3) 00:13:54.948 27286.807 - 27405.964: 99.4818% ( 3) 00:13:54.948 27405.964 - 27525.120: 99.4975% ( 2) 00:13:54.948 32887.156 - 33125.469: 99.5053% ( 1) 00:13:54.948 33125.469 - 33363.782: 99.5446% ( 5) 00:13:54.948 33363.782 - 33602.095: 99.5917% ( 6) 00:13:54.948 33602.095 - 33840.407: 99.6310% ( 5) 00:13:54.948 33840.407 - 34078.720: 99.6781% ( 6) 00:13:54.948 34078.720 - 34317.033: 99.7252% ( 6) 00:13:54.948 34317.033 - 34555.345: 99.7644% ( 5) 00:13:54.948 34555.345 - 34793.658: 99.8116% ( 6) 00:13:54.948 34793.658 - 35031.971: 99.8587% ( 6) 00:13:54.948 35031.971 - 35270.284: 99.8979% ( 5) 00:13:54.948 35270.284 - 35508.596: 99.9450% ( 6) 00:13:54.948 35508.596 - 35746.909: 99.9843% ( 5) 00:13:54.948 
35746.909 - 35985.222: 100.0000% ( 2)
00:13:54.948 
00:13:54.948 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:13:54.948 ==============================================================================
00:13:54.948        Range in us     Cumulative    IO count
00:13:54.948 [ bucket-by-bucket table: 7804.742 us - 33125.469 us, cumulative 0.0707% -> 100.0000% ]
00:13:54.949 
00:13:54.949 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:13:54.949 ==============================================================================
00:13:54.949        Range in us     Cumulative    IO count
00:13:54.949 [ bucket-by-bucket table: 7745.164 us - 30384.873 us, cumulative 0.0079% -> 100.0000% ]
00:13:54.950 
00:13:54.950 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:13:54.950 ==============================================================================
00:13:54.950        Range in us     Cumulative    IO count
00:13:54.950 [ bucket-by-bucket table: 7745.164 us - 27286.807 us, cumulative 0.0079% -> 100.0000% ]
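Every detailed histogram in this log shares one row shape, "lo - hi: cumulative% ( count )": the percentage is cumulative across all buckets so far, while the parenthesized figure is the per-bucket I/O count. A minimal parsing sketch in Python (hypothetical helper names, assuming the log text is already held in a string):

import re

# One histogram row, e.g. "7804.742 - 7864.320: 0.0707% ( 9)"
BUCKET_RE = re.compile(r"(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)")

def parse_buckets(text):
    """Yield (lo_us, hi_us, cumulative_pct, io_count) for each bucket row."""
    for m in BUCKET_RE.finditer(text):
        lo, hi, cum, count = m.groups()
        yield float(lo), float(hi), float(cum), int(count)

def percentile_us(buckets, pct):
    """Upper bound, in us, of the first bucket whose cumulative % reaches pct."""
    for _lo, hi, cum, _count in buckets:
        if cum >= pct:
            return hi
    raise ValueError(f"p{pct} lies beyond the recorded buckets")

Run over a single histogram section, percentile_us(parse_buckets(section), 99.0) should land on the same value the matching "Summary latency data" block reports at 99.00000% for that device.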
00:13:54.951 
00:13:54.951 13:08:41 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:13:56.328 Initializing NVMe Controllers
00:13:56.328 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:56.328 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:56.328 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:56.328 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:56.328 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:13:56.328 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:13:56.328 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:13:56.328 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:13:56.328 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:13:56.328 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:13:56.328 Initialization complete. Launching workers.
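The spdk_nvme_perf invocation above is what produced all of the output that follows. A standalone reproduction sketch, with the binary path and arguments copied verbatim from that log line (my reading of the flags: -q queue depth, -w workload, -o I/O size in bytes, -t run time in seconds, -L once for latency summaries and twice for the detailed histograms, -i shared-memory group ID; treat those glosses as assumptions, not documentation):

import subprocess

# Path and arguments taken verbatim from the harness command above.
# Running it needs the same root/hugepage setup the CI node has.
PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"
cmd = [PERF, "-q", "128", "-w", "write", "-o", "12288", "-t", "1", "-LL", "-i", "0"]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)  # device table, per-device summaries, detailed histograms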
00:13:56.328 ========================================================
00:13:56.328 Latency(us)
00:13:56.329 Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:56.329 PCIE (0000:00:10.0) NSID 1 from core 0 :   11637.63     136.38   11022.01    8862.21   50722.34
00:13:56.329 PCIE (0000:00:11.0) NSID 1 from core 0 :   11637.63     136.38   10993.07    8804.26   47859.56
00:13:56.329 PCIE (0000:00:13.0) NSID 1 from core 0 :   11637.63     136.38   10964.19    8974.91   45273.30
00:13:56.329 PCIE (0000:00:12.0) NSID 1 from core 0 :   11637.63     136.38   10935.15    8863.82   42264.98
00:13:56.329 PCIE (0000:00:12.0) NSID 2 from core 0 :   11637.63     136.38   10906.02    8914.70   39296.50
00:13:56.329 PCIE (0000:00:12.0) NSID 3 from core 0 :   11637.63     136.38   10877.27    8821.49   36132.26
00:13:56.329 ========================================================
00:13:56.329 Total                                  :   69825.79     818.27   10949.62    8804.26   50722.34
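The IOPS and MiB/s columns agree with the 12288-byte I/O size requested via -o; a quick arithmetic check:

io_size_bytes = 12288    # from the -o flag above
per_ns_iops = 11637.63   # any per-namespace row in the table
total_iops = 69825.79    # the Total row

MIB = 1 << 20
print(f"{per_ns_iops * io_size_bytes / MIB:.2f} MiB/s")  # 136.38, matching the table
print(f"{total_iops * io_size_bytes / MIB:.2f} MiB/s")   # 818.27, matching the Total row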
00:13:56.329 
00:13:56.329 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:13:56.329 =================================================================================
00:13:56.329    1.00000% :  9234.618us   10.00000% :  9830.400us   25.00000% : 10187.869us   50.00000% : 10604.916us
00:13:56.329   75.00000% : 11141.120us   90.00000% : 11677.324us   95.00000% : 12094.371us   98.00000% : 13047.622us
00:13:56.329   99.00000% : 37653.411us   99.50000% : 47900.858us   99.90000% : 50283.985us   99.99000-99.99999% : 50760.611us
00:13:56.329 
00:13:56.329 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:13:56.329 =================================================================================
00:13:56.329    1.00000% :  9413.353us   10.00000% :  9949.556us   25.00000% : 10247.447us   50.00000% : 10604.916us
00:13:56.329   75.00000% : 11081.542us   90.00000% : 11558.167us   95.00000% : 11975.215us   98.00000% : 13047.622us
00:13:56.329   99.00000% : 35985.222us   99.50000% : 45279.418us   99.90000% : 47424.233us   99.99000-99.99999% : 47900.858us
00:13:56.329 
00:13:56.329 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:13:56.329 =================================================================================
00:13:56.329    1.00000% :  9472.931us   10.00000% :  9949.556us   25.00000% : 10307.025us   50.00000% : 10604.916us
00:13:56.329   75.00000% : 11021.964us   90.00000% : 11498.589us   95.00000% : 11975.215us   98.00000% : 12868.887us
00:13:56.329   99.00000% : 33840.407us   99.50000% : 43134.604us   99.90000% : 45041.105us   99.99000-99.99999% : 45279.418us
00:13:56.329 
00:13:56.329 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:13:56.329 =================================================================================
00:13:56.329    1.00000% :  9472.931us   10.00000% :  9949.556us   25.00000% : 10247.447us   50.00000% : 10604.916us
00:13:56.329   75.00000% : 11021.964us   90.00000% : 11558.167us   95.00000% : 12034.793us   98.00000% : 12809.309us
00:13:56.329   99.00000% : 31457.280us   99.50000% : 40036.538us   99.90000% : 41943.040us   99.99000-99.99999% : 42419.665us
00:13:56.329 
00:13:56.329 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:13:56.329 =================================================================================
00:13:56.329    1.00000% :  9413.353us   10.00000% :  9949.556us   25.00000% : 10307.025us   50.00000% : 10604.916us
00:13:56.329   75.00000% : 11021.964us   90.00000% : 11498.589us   95.00000% : 12034.793us   98.00000% : 12988.044us
00:13:56.329   99.00000% : 28597.527us   99.50000% : 36938.473us   99.90000% : 38844.975us   99.99000-99.99999% : 39321.600us
00:13:56.329 
00:13:56.329 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:13:56.329 =================================================================================
00:13:56.329    1.00000% :  9413.353us   10.00000% :  9949.556us   25.00000% : 10247.447us   50.00000% : 10604.916us
00:13:56.329   75.00000% : 11021.964us   90.00000% : 11498.589us   95.00000% : 12034.793us   98.00000% : 12749.731us
00:13:56.329   99.00000% : 26571.869us   99.50000% : 32410.531us   99.90000% : 35746.909us   99.99000-99.99999% : 36223.535us
00:13:56.329 
00:13:56.329 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:13:56.329 ==============================================================================
00:13:56.329        Range in us     Cumulative    IO count
00:13:56.329 [ bucket-by-bucket table: 8817.571 us - 50760.611 us, cumulative 0.0172% -> 100.0000% ]
00:13:56.329 10604.916 - 10664.495: 53.6487% ( 390) 00:13:56.329 10664.495 - 10724.073: 56.5076% ( 333) 00:13:56.329 10724.073 - 10783.651: 59.2548% ( 320) 00:13:56.329 10783.651 - 10843.229: 62.6803% ( 399) 00:13:56.329 10843.229 - 10902.807: 65.9341% ( 379) 00:13:56.329 10902.807 - 10962.385: 68.9045% ( 346) 00:13:56.329 10962.385 - 11021.964: 71.5573% ( 309) 00:13:56.329 11021.964 - 11081.542: 74.3905% ( 330) 00:13:56.329 11081.542 - 11141.120: 76.7342% ( 273) 00:13:56.329 11141.120 - 11200.698: 78.7260% ( 232) 00:13:56.329 11200.698 - 11260.276: 80.6920% ( 229) 00:13:56.329 11260.276 - 11319.855: 82.6065% ( 223) 00:13:56.329 11319.855 - 11379.433: 84.2291% ( 189) 00:13:56.329 11379.433 - 11439.011: 85.5855% ( 158) 00:13:56.329 11439.011 - 11498.589: 86.9505% ( 159) 00:13:56.329 11498.589 - 11558.167: 88.2727% ( 154) 00:13:56.329 11558.167 - 11617.745: 89.5175% ( 145) 00:13:56.329 11617.745 - 11677.324: 90.6765% ( 135) 00:13:56.329 11677.324 - 11736.902: 91.7411% ( 124) 00:13:56.329 11736.902 - 11796.480: 92.5481% ( 94) 00:13:56.329 11796.480 - 11856.058: 93.3036% ( 88) 00:13:56.329 11856.058 - 11915.636: 93.8702% ( 66) 00:13:56.329 11915.636 - 11975.215: 94.4454% ( 67) 00:13:56.329 11975.215 - 12034.793: 94.9948% ( 64) 00:13:56.329 12034.793 - 12094.371: 95.5958% ( 70) 00:13:56.329 12094.371 - 12153.949: 95.9306% ( 39) 00:13:56.329 12153.949 - 12213.527: 96.2311% ( 35) 00:13:56.329 12213.527 - 12273.105: 96.5230% ( 34) 00:13:56.329 12273.105 - 12332.684: 96.7806% ( 30) 00:13:56.329 12332.684 - 12392.262: 96.9523% ( 20) 00:13:56.329 12392.262 - 12451.840: 97.1068% ( 18) 00:13:56.329 12451.840 - 12511.418: 97.2699% ( 19) 00:13:56.329 12511.418 - 12570.996: 97.4760% ( 24) 00:13:56.329 12570.996 - 12630.575: 97.6047% ( 15) 00:13:56.329 12630.575 - 12690.153: 97.6906% ( 10) 00:13:56.329 12690.153 - 12749.731: 97.7507% ( 7) 00:13:56.329 12749.731 - 12809.309: 97.8108% ( 7) 00:13:56.329 12809.309 - 12868.887: 97.8709% ( 7) 00:13:56.329 12868.887 - 12928.465: 97.9310% ( 7) 00:13:56.329 12928.465 - 12988.044: 97.9911% ( 7) 00:13:56.329 12988.044 - 13047.622: 98.0426% ( 6) 00:13:56.329 13047.622 - 13107.200: 98.0941% ( 6) 00:13:56.329 13107.200 - 13166.778: 98.1198% ( 3) 00:13:56.329 13166.778 - 13226.356: 98.1542% ( 4) 00:13:56.329 13226.356 - 13285.935: 98.1885% ( 4) 00:13:56.329 13285.935 - 13345.513: 98.2229% ( 4) 00:13:56.329 13345.513 - 13405.091: 98.2572% ( 4) 00:13:56.329 13405.091 - 13464.669: 98.2916% ( 4) 00:13:56.329 13464.669 - 13524.247: 98.3345% ( 5) 00:13:56.329 13524.247 - 13583.825: 98.3516% ( 2) 00:13:56.329 13583.825 - 13643.404: 98.4032% ( 6) 00:13:56.329 13643.404 - 13702.982: 98.4289% ( 3) 00:13:56.330 13702.982 - 13762.560: 98.4375% ( 1) 00:13:56.330 13762.560 - 13822.138: 98.4461% ( 1) 00:13:56.330 13822.138 - 13881.716: 98.4804% ( 4) 00:13:56.330 13881.716 - 13941.295: 98.4890% ( 1) 00:13:56.330 13941.295 - 14000.873: 98.5062% ( 2) 00:13:56.330 14000.873 - 14060.451: 98.5234% ( 2) 00:13:56.330 14060.451 - 14120.029: 98.5319% ( 1) 00:13:56.330 14120.029 - 14179.607: 98.5663% ( 4) 00:13:56.330 14179.607 - 14239.185: 98.6006% ( 4) 00:13:56.330 14239.185 - 14298.764: 98.6178% ( 2) 00:13:56.330 14298.764 - 14358.342: 98.6350% ( 2) 00:13:56.330 14358.342 - 14417.920: 98.6521% ( 2) 00:13:56.330 14417.920 - 14477.498: 98.6607% ( 1) 00:13:56.330 14477.498 - 14537.076: 98.6693% ( 1) 00:13:56.330 14537.076 - 14596.655: 98.6865% ( 2) 00:13:56.330 14715.811 - 14775.389: 98.7036% ( 2) 00:13:56.330 14775.389 - 14834.967: 98.7122% ( 1) 00:13:56.330 14834.967 - 14894.545: 98.7294% ( 2) 
00:13:56.330 14894.545 - 14954.124: 98.7466% ( 2) 00:13:56.330 14954.124 - 15013.702: 98.7552% ( 1) 00:13:56.330 15013.702 - 15073.280: 98.7723% ( 2) 00:13:56.330 15073.280 - 15132.858: 98.7809% ( 1) 00:13:56.330 15132.858 - 15192.436: 98.8067% ( 3) 00:13:56.330 15192.436 - 15252.015: 98.8152% ( 1) 00:13:56.330 15252.015 - 15371.171: 98.8496% ( 4) 00:13:56.330 15371.171 - 15490.327: 98.8753% ( 3) 00:13:56.330 15490.327 - 15609.484: 98.8925% ( 2) 00:13:56.330 15609.484 - 15728.640: 98.9011% ( 1) 00:13:56.330 36938.473 - 37176.785: 98.9354% ( 4) 00:13:56.330 37176.785 - 37415.098: 98.9698% ( 4) 00:13:56.330 37415.098 - 37653.411: 99.0041% ( 4) 00:13:56.330 37653.411 - 37891.724: 99.0470% ( 5) 00:13:56.330 37891.724 - 38130.036: 99.0814% ( 4) 00:13:56.330 38130.036 - 38368.349: 99.1243% ( 5) 00:13:56.330 38368.349 - 38606.662: 99.1587% ( 4) 00:13:56.330 38606.662 - 38844.975: 99.1844% ( 3) 00:13:56.330 38844.975 - 39083.287: 99.2273% ( 5) 00:13:56.330 39083.287 - 39321.600: 99.2703% ( 5) 00:13:56.330 39321.600 - 39559.913: 99.3132% ( 5) 00:13:56.330 39559.913 - 39798.225: 99.3389% ( 3) 00:13:56.330 39798.225 - 40036.538: 99.3905% ( 6) 00:13:56.330 40036.538 - 40274.851: 99.4334% ( 5) 00:13:56.330 40274.851 - 40513.164: 99.4505% ( 2) 00:13:56.330 47424.233 - 47662.545: 99.4763% ( 3) 00:13:56.330 47662.545 - 47900.858: 99.5106% ( 4) 00:13:56.330 47900.858 - 48139.171: 99.5536% ( 5) 00:13:56.330 48139.171 - 48377.484: 99.5965% ( 5) 00:13:56.330 48377.484 - 48615.796: 99.6308% ( 4) 00:13:56.330 48615.796 - 48854.109: 99.6738% ( 5) 00:13:56.330 48854.109 - 49092.422: 99.7253% ( 6) 00:13:56.330 49092.422 - 49330.735: 99.7510% ( 3) 00:13:56.330 49330.735 - 49569.047: 99.8025% ( 6) 00:13:56.330 49569.047 - 49807.360: 99.8369% ( 4) 00:13:56.330 49807.360 - 50045.673: 99.8798% ( 5) 00:13:56.330 50045.673 - 50283.985: 99.9313% ( 6) 00:13:56.330 50283.985 - 50522.298: 99.9657% ( 4) 00:13:56.330 50522.298 - 50760.611: 100.0000% ( 4) 00:13:56.330 00:13:56.330 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:56.330 ============================================================================== 00:13:56.330 Range in us Cumulative IO count 00:13:56.330 8757.993 - 8817.571: 0.0086% ( 1) 00:13:56.330 8996.305 - 9055.884: 0.0172% ( 1) 00:13:56.330 9055.884 - 9115.462: 0.0859% ( 8) 00:13:56.330 9115.462 - 9175.040: 0.2318% ( 17) 00:13:56.330 9175.040 - 9234.618: 0.4121% ( 21) 00:13:56.330 9234.618 - 9294.196: 0.6181% ( 24) 00:13:56.330 9294.196 - 9353.775: 0.9100% ( 34) 00:13:56.330 9353.775 - 9413.353: 1.2964% ( 45) 00:13:56.330 9413.353 - 9472.931: 1.7256% ( 50) 00:13:56.330 9472.931 - 9532.509: 2.4296% ( 82) 00:13:56.330 9532.509 - 9592.087: 3.1078% ( 79) 00:13:56.330 9592.087 - 9651.665: 3.8805% ( 90) 00:13:56.330 9651.665 - 9711.244: 5.0223% ( 133) 00:13:56.330 9711.244 - 9770.822: 6.3530% ( 155) 00:13:56.330 9770.822 - 9830.400: 7.8640% ( 176) 00:13:56.330 9830.400 - 9889.978: 9.3578% ( 174) 00:13:56.330 9889.978 - 9949.556: 11.4269% ( 241) 00:13:56.330 9949.556 - 10009.135: 13.7191% ( 267) 00:13:56.330 10009.135 - 10068.713: 16.0027% ( 266) 00:13:56.330 10068.713 - 10128.291: 18.5783% ( 300) 00:13:56.330 10128.291 - 10187.869: 21.7548% ( 370) 00:13:56.330 10187.869 - 10247.447: 25.3692% ( 421) 00:13:56.330 10247.447 - 10307.025: 28.7861% ( 398) 00:13:56.330 10307.025 - 10366.604: 33.4993% ( 549) 00:13:56.330 10366.604 - 10426.182: 37.9207% ( 515) 00:13:56.330 10426.182 - 10485.760: 42.3420% ( 515) 00:13:56.330 10485.760 - 10545.338: 46.9694% ( 539) 00:13:56.330 10545.338 - 10604.916: 
51.8286% ( 566) 00:13:56.330 10604.916 - 10664.495: 55.7263% ( 454) 00:13:56.330 10664.495 - 10724.073: 59.5124% ( 441) 00:13:56.330 10724.073 - 10783.651: 63.4873% ( 463) 00:13:56.330 10783.651 - 10843.229: 66.5007% ( 351) 00:13:56.330 10843.229 - 10902.807: 69.5656% ( 357) 00:13:56.330 10902.807 - 10962.385: 72.1240% ( 298) 00:13:56.330 10962.385 - 11021.964: 74.9742% ( 332) 00:13:56.330 11021.964 - 11081.542: 77.8331% ( 333) 00:13:56.330 11081.542 - 11141.120: 80.2284% ( 279) 00:13:56.330 11141.120 - 11200.698: 82.4176% ( 255) 00:13:56.330 11200.698 - 11260.276: 84.2548% ( 214) 00:13:56.330 11260.276 - 11319.855: 86.1951% ( 226) 00:13:56.330 11319.855 - 11379.433: 87.7833% ( 185) 00:13:56.330 11379.433 - 11439.011: 89.0539% ( 148) 00:13:56.330 11439.011 - 11498.589: 89.9038% ( 99) 00:13:56.330 11498.589 - 11558.167: 90.6508% ( 87) 00:13:56.330 11558.167 - 11617.745: 91.3032% ( 76) 00:13:56.330 11617.745 - 11677.324: 91.8956% ( 69) 00:13:56.330 11677.324 - 11736.902: 92.5738% ( 79) 00:13:56.330 11736.902 - 11796.480: 93.3207% ( 87) 00:13:56.330 11796.480 - 11856.058: 94.0934% ( 90) 00:13:56.330 11856.058 - 11915.636: 94.7545% ( 77) 00:13:56.330 11915.636 - 11975.215: 95.3125% ( 65) 00:13:56.330 11975.215 - 12034.793: 95.8705% ( 65) 00:13:56.330 12034.793 - 12094.371: 96.4372% ( 66) 00:13:56.330 12094.371 - 12153.949: 96.6861% ( 29) 00:13:56.330 12153.949 - 12213.527: 96.9265% ( 28) 00:13:56.330 12213.527 - 12273.105: 97.2098% ( 33) 00:13:56.330 12273.105 - 12332.684: 97.4073% ( 23) 00:13:56.330 12332.684 - 12392.262: 97.5618% ( 18) 00:13:56.330 12392.262 - 12451.840: 97.6477% ( 10) 00:13:56.330 12451.840 - 12511.418: 97.7163% ( 8) 00:13:56.330 12511.418 - 12570.996: 97.7679% ( 6) 00:13:56.330 12570.996 - 12630.575: 97.8280% ( 7) 00:13:56.330 12630.575 - 12690.153: 97.8623% ( 4) 00:13:56.330 12690.153 - 12749.731: 97.9052% ( 5) 00:13:56.330 12749.731 - 12809.309: 97.9310% ( 3) 00:13:56.330 12809.309 - 12868.887: 97.9481% ( 2) 00:13:56.330 12868.887 - 12928.465: 97.9739% ( 3) 00:13:56.330 12928.465 - 12988.044: 97.9911% ( 2) 00:13:56.330 12988.044 - 13047.622: 98.0254% ( 4) 00:13:56.330 13047.622 - 13107.200: 98.0683% ( 5) 00:13:56.330 13107.200 - 13166.778: 98.1370% ( 8) 00:13:56.330 13166.778 - 13226.356: 98.1971% ( 7) 00:13:56.330 13226.356 - 13285.935: 98.3087% ( 13) 00:13:56.330 13285.935 - 13345.513: 98.4804% ( 20) 00:13:56.330 13345.513 - 13405.091: 98.6006% ( 14) 00:13:56.330 13405.091 - 13464.669: 98.6865% ( 10) 00:13:56.330 13464.669 - 13524.247: 98.7122% ( 3) 00:13:56.330 13524.247 - 13583.825: 98.7380% ( 3) 00:13:56.330 13583.825 - 13643.404: 98.7637% ( 3) 00:13:56.330 13643.404 - 13702.982: 98.7809% ( 2) 00:13:56.330 13702.982 - 13762.560: 98.7895% ( 1) 00:13:56.330 13762.560 - 13822.138: 98.8152% ( 3) 00:13:56.330 13822.138 - 13881.716: 98.8324% ( 2) 00:13:56.330 13881.716 - 13941.295: 98.8496% ( 2) 00:13:56.330 13941.295 - 14000.873: 98.8668% ( 2) 00:13:56.330 14000.873 - 14060.451: 98.8839% ( 2) 00:13:56.330 14060.451 - 14120.029: 98.8925% ( 1) 00:13:56.330 14120.029 - 14179.607: 98.9011% ( 1) 00:13:56.330 35270.284 - 35508.596: 98.9354% ( 4) 00:13:56.330 35508.596 - 35746.909: 98.9870% ( 6) 00:13:56.330 35746.909 - 35985.222: 99.0299% ( 5) 00:13:56.330 35985.222 - 36223.535: 99.0728% ( 5) 00:13:56.330 36223.535 - 36461.847: 99.1157% ( 5) 00:13:56.330 36461.847 - 36700.160: 99.1587% ( 5) 00:13:56.330 36700.160 - 36938.473: 99.1930% ( 4) 00:13:56.330 36938.473 - 37176.785: 99.2359% ( 5) 00:13:56.330 37176.785 - 37415.098: 99.2703% ( 4) 00:13:56.330 37415.098 - 37653.411: 
99.3218% ( 6) 00:13:56.330 37653.411 - 37891.724: 99.3647% ( 5) 00:13:56.330 37891.724 - 38130.036: 99.4076% ( 5) 00:13:56.330 38130.036 - 38368.349: 99.4505% ( 5) 00:13:56.330 44802.793 - 45041.105: 99.4677% ( 2) 00:13:56.330 45041.105 - 45279.418: 99.5106% ( 5) 00:13:56.330 45279.418 - 45517.731: 99.5622% ( 6) 00:13:56.330 45517.731 - 45756.044: 99.6051% ( 5) 00:13:56.330 45756.044 - 45994.356: 99.6480% ( 5) 00:13:56.330 45994.356 - 46232.669: 99.6909% ( 5) 00:13:56.330 46232.669 - 46470.982: 99.7424% ( 6) 00:13:56.330 46470.982 - 46709.295: 99.7854% ( 5) 00:13:56.330 46709.295 - 46947.607: 99.8283% ( 5) 00:13:56.330 46947.607 - 47185.920: 99.8712% ( 5) 00:13:56.330 47185.920 - 47424.233: 99.9141% ( 5) 00:13:56.330 47424.233 - 47662.545: 99.9571% ( 5) 00:13:56.330 47662.545 - 47900.858: 100.0000% ( 5) 00:13:56.330 00:13:56.330 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:56.330 ============================================================================== 00:13:56.330 Range in us Cumulative IO count 00:13:56.330 8936.727 - 8996.305: 0.0172% ( 2) 00:13:56.330 8996.305 - 9055.884: 0.0515% ( 4) 00:13:56.330 9055.884 - 9115.462: 0.0773% ( 3) 00:13:56.330 9115.462 - 9175.040: 0.1288% ( 6) 00:13:56.330 9175.040 - 9234.618: 0.2060% ( 9) 00:13:56.330 9234.618 - 9294.196: 0.4035% ( 23) 00:13:56.330 9294.196 - 9353.775: 0.5580% ( 18) 00:13:56.330 9353.775 - 9413.353: 0.8929% ( 39) 00:13:56.331 9413.353 - 9472.931: 1.3908% ( 58) 00:13:56.331 9472.931 - 9532.509: 2.0175% ( 73) 00:13:56.331 9532.509 - 9592.087: 2.8073% ( 92) 00:13:56.331 9592.087 - 9651.665: 3.7946% ( 115) 00:13:56.331 9651.665 - 9711.244: 5.1253% ( 155) 00:13:56.331 9711.244 - 9770.822: 6.6020% ( 172) 00:13:56.331 9770.822 - 9830.400: 7.9155% ( 153) 00:13:56.331 9830.400 - 9889.978: 9.3407% ( 166) 00:13:56.331 9889.978 - 9949.556: 11.2294% ( 220) 00:13:56.331 9949.556 - 10009.135: 13.4186% ( 255) 00:13:56.331 10009.135 - 10068.713: 15.4705% ( 239) 00:13:56.331 10068.713 - 10128.291: 17.9945% ( 294) 00:13:56.331 10128.291 - 10187.869: 21.0165% ( 352) 00:13:56.331 10187.869 - 10247.447: 24.6051% ( 418) 00:13:56.331 10247.447 - 10307.025: 28.5285% ( 457) 00:13:56.331 10307.025 - 10366.604: 32.7867% ( 496) 00:13:56.331 10366.604 - 10426.182: 37.2510% ( 520) 00:13:56.331 10426.182 - 10485.760: 42.2476% ( 582) 00:13:56.331 10485.760 - 10545.338: 46.5831% ( 505) 00:13:56.331 10545.338 - 10604.916: 51.5711% ( 581) 00:13:56.331 10604.916 - 10664.495: 56.0783% ( 525) 00:13:56.331 10664.495 - 10724.073: 60.2764% ( 489) 00:13:56.331 10724.073 - 10783.651: 63.6161% ( 389) 00:13:56.331 10783.651 - 10843.229: 67.3163% ( 431) 00:13:56.331 10843.229 - 10902.807: 70.3640% ( 355) 00:13:56.331 10902.807 - 10962.385: 73.4976% ( 365) 00:13:56.331 10962.385 - 11021.964: 76.4938% ( 349) 00:13:56.331 11021.964 - 11081.542: 79.0264% ( 295) 00:13:56.331 11081.542 - 11141.120: 81.3874% ( 275) 00:13:56.331 11141.120 - 11200.698: 83.5337% ( 250) 00:13:56.331 11200.698 - 11260.276: 85.3280% ( 209) 00:13:56.331 11260.276 - 11319.855: 87.0278% ( 198) 00:13:56.331 11319.855 - 11379.433: 88.1353% ( 129) 00:13:56.331 11379.433 - 11439.011: 89.2428% ( 129) 00:13:56.331 11439.011 - 11498.589: 90.2902% ( 122) 00:13:56.331 11498.589 - 11558.167: 91.1745% ( 103) 00:13:56.331 11558.167 - 11617.745: 92.0158% ( 98) 00:13:56.331 11617.745 - 11677.324: 92.7885% ( 90) 00:13:56.331 11677.324 - 11736.902: 93.3980% ( 71) 00:13:56.331 11736.902 - 11796.480: 93.8616% ( 54) 00:13:56.331 11796.480 - 11856.058: 94.5227% ( 77) 00:13:56.331 11856.058 - 11915.636: 
94.8146% ( 34) 00:13:56.331 11915.636 - 11975.215: 95.0979% ( 33) 00:13:56.331 11975.215 - 12034.793: 95.4842% ( 45) 00:13:56.331 12034.793 - 12094.371: 95.8362% ( 41) 00:13:56.331 12094.371 - 12153.949: 96.1796% ( 40) 00:13:56.331 12153.949 - 12213.527: 96.5831% ( 47) 00:13:56.331 12213.527 - 12273.105: 96.8063% ( 26) 00:13:56.331 12273.105 - 12332.684: 97.2613% ( 53) 00:13:56.331 12332.684 - 12392.262: 97.4502% ( 22) 00:13:56.331 12392.262 - 12451.840: 97.5532% ( 12) 00:13:56.331 12451.840 - 12511.418: 97.6820% ( 15) 00:13:56.331 12511.418 - 12570.996: 97.7764% ( 11) 00:13:56.331 12570.996 - 12630.575: 97.8537% ( 9) 00:13:56.331 12630.575 - 12690.153: 97.9052% ( 6) 00:13:56.331 12690.153 - 12749.731: 97.9653% ( 7) 00:13:56.331 12749.731 - 12809.309: 97.9997% ( 4) 00:13:56.331 12809.309 - 12868.887: 98.0340% ( 4) 00:13:56.331 12868.887 - 12928.465: 98.0598% ( 3) 00:13:56.331 12928.465 - 12988.044: 98.0769% ( 2) 00:13:56.331 12988.044 - 13047.622: 98.1628% ( 10) 00:13:56.331 13047.622 - 13107.200: 98.2400% ( 9) 00:13:56.331 13107.200 - 13166.778: 98.3602% ( 14) 00:13:56.331 13166.778 - 13226.356: 98.4804% ( 14) 00:13:56.331 13226.356 - 13285.935: 98.5577% ( 9) 00:13:56.331 13285.935 - 13345.513: 98.6264% ( 8) 00:13:56.331 13345.513 - 13405.091: 98.7466% ( 14) 00:13:56.331 13405.091 - 13464.669: 98.7637% ( 2) 00:13:56.331 13464.669 - 13524.247: 98.7809% ( 2) 00:13:56.331 13524.247 - 13583.825: 98.7981% ( 2) 00:13:56.331 13583.825 - 13643.404: 98.8152% ( 2) 00:13:56.331 13643.404 - 13702.982: 98.8238% ( 1) 00:13:56.331 13702.982 - 13762.560: 98.8410% ( 2) 00:13:56.331 13762.560 - 13822.138: 98.8582% ( 2) 00:13:56.331 13822.138 - 13881.716: 98.8753% ( 2) 00:13:56.331 13881.716 - 13941.295: 98.8925% ( 2) 00:13:56.331 13941.295 - 14000.873: 98.9011% ( 1) 00:13:56.331 33125.469 - 33363.782: 98.9269% ( 3) 00:13:56.331 33363.782 - 33602.095: 98.9698% ( 5) 00:13:56.331 33602.095 - 33840.407: 99.0041% ( 4) 00:13:56.331 33840.407 - 34078.720: 99.0470% ( 5) 00:13:56.331 34078.720 - 34317.033: 99.0900% ( 5) 00:13:56.331 34317.033 - 34555.345: 99.1329% ( 5) 00:13:56.331 34555.345 - 34793.658: 99.1758% ( 5) 00:13:56.331 34793.658 - 35031.971: 99.2188% ( 5) 00:13:56.331 35031.971 - 35270.284: 99.2617% ( 5) 00:13:56.331 35270.284 - 35508.596: 99.2960% ( 4) 00:13:56.331 35508.596 - 35746.909: 99.3389% ( 5) 00:13:56.331 35746.909 - 35985.222: 99.3819% ( 5) 00:13:56.331 35985.222 - 36223.535: 99.4334% ( 6) 00:13:56.331 36223.535 - 36461.847: 99.4505% ( 2) 00:13:56.331 42657.978 - 42896.291: 99.4935% ( 5) 00:13:56.331 42896.291 - 43134.604: 99.5450% ( 6) 00:13:56.331 43134.604 - 43372.916: 99.5879% ( 5) 00:13:56.331 43372.916 - 43611.229: 99.6394% ( 6) 00:13:56.331 43611.229 - 43849.542: 99.6909% ( 6) 00:13:56.331 43849.542 - 44087.855: 99.7339% ( 5) 00:13:56.331 44087.855 - 44326.167: 99.7854% ( 6) 00:13:56.331 44326.167 - 44564.480: 99.8369% ( 6) 00:13:56.331 44564.480 - 44802.793: 99.8884% ( 6) 00:13:56.331 44802.793 - 45041.105: 99.9399% ( 6) 00:13:56.331 45041.105 - 45279.418: 100.0000% ( 7) 00:13:56.331 00:13:56.331 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:56.331 ============================================================================== 00:13:56.331 Range in us Cumulative IO count 00:13:56.331 8817.571 - 8877.149: 0.0086% ( 1) 00:13:56.331 8936.727 - 8996.305: 0.0172% ( 1) 00:13:56.331 9055.884 - 9115.462: 0.0429% ( 3) 00:13:56.331 9115.462 - 9175.040: 0.0944% ( 6) 00:13:56.331 9175.040 - 9234.618: 0.1889% ( 11) 00:13:56.331 9234.618 - 9294.196: 0.3692% ( 21) 00:13:56.331 
9294.196 - 9353.775: 0.5495% ( 21) 00:13:56.331 9353.775 - 9413.353: 0.8843% ( 39) 00:13:56.331 9413.353 - 9472.931: 1.3049% ( 49) 00:13:56.331 9472.931 - 9532.509: 1.8887% ( 68) 00:13:56.331 9532.509 - 9592.087: 2.7902% ( 105) 00:13:56.331 9592.087 - 9651.665: 4.1552% ( 159) 00:13:56.331 9651.665 - 9711.244: 5.4344% ( 149) 00:13:56.331 9711.244 - 9770.822: 6.6878% ( 146) 00:13:56.331 9770.822 - 9830.400: 8.1302% ( 168) 00:13:56.331 9830.400 - 9889.978: 9.3578% ( 143) 00:13:56.331 9889.978 - 9949.556: 10.8345% ( 172) 00:13:56.331 9949.556 - 10009.135: 12.9722% ( 249) 00:13:56.331 10009.135 - 10068.713: 15.0412% ( 241) 00:13:56.331 10068.713 - 10128.291: 17.8400% ( 326) 00:13:56.331 10128.291 - 10187.869: 21.4114% ( 416) 00:13:56.331 10187.869 - 10247.447: 25.1202% ( 432) 00:13:56.331 10247.447 - 10307.025: 30.1511% ( 586) 00:13:56.331 10307.025 - 10366.604: 34.5810% ( 516) 00:13:56.331 10366.604 - 10426.182: 38.5560% ( 463) 00:13:56.331 10426.182 - 10485.760: 43.4667% ( 572) 00:13:56.331 10485.760 - 10545.338: 48.4375% ( 579) 00:13:56.331 10545.338 - 10604.916: 52.6185% ( 487) 00:13:56.331 10604.916 - 10664.495: 56.5591% ( 459) 00:13:56.331 10664.495 - 10724.073: 59.8214% ( 380) 00:13:56.331 10724.073 - 10783.651: 63.1010% ( 382) 00:13:56.331 10783.651 - 10843.229: 66.9042% ( 443) 00:13:56.331 10843.229 - 10902.807: 70.6044% ( 431) 00:13:56.331 10902.807 - 10962.385: 73.2400% ( 307) 00:13:56.331 10962.385 - 11021.964: 75.6782% ( 284) 00:13:56.331 11021.964 - 11081.542: 78.0649% ( 278) 00:13:56.331 11081.542 - 11141.120: 80.5804% ( 293) 00:13:56.331 11141.120 - 11200.698: 82.8812% ( 268) 00:13:56.331 11200.698 - 11260.276: 85.1047% ( 259) 00:13:56.331 11260.276 - 11319.855: 86.8819% ( 207) 00:13:56.331 11319.855 - 11379.433: 88.1353% ( 146) 00:13:56.331 11379.433 - 11439.011: 89.0883% ( 111) 00:13:56.331 11439.011 - 11498.589: 89.8438% ( 88) 00:13:56.331 11498.589 - 11558.167: 90.5735% ( 85) 00:13:56.331 11558.167 - 11617.745: 91.5179% ( 110) 00:13:56.331 11617.745 - 11677.324: 92.1016% ( 68) 00:13:56.331 11677.324 - 11736.902: 92.6511% ( 64) 00:13:56.331 11736.902 - 11796.480: 93.4667% ( 95) 00:13:56.331 11796.480 - 11856.058: 93.9732% ( 59) 00:13:56.331 11856.058 - 11915.636: 94.3853% ( 48) 00:13:56.331 11915.636 - 11975.215: 94.7115% ( 38) 00:13:56.331 11975.215 - 12034.793: 95.0206% ( 36) 00:13:56.331 12034.793 - 12094.371: 95.5786% ( 65) 00:13:56.331 12094.371 - 12153.949: 95.9306% ( 41) 00:13:56.331 12153.949 - 12213.527: 96.1624% ( 27) 00:13:56.331 12213.527 - 12273.105: 96.5144% ( 41) 00:13:56.331 12273.105 - 12332.684: 96.8492% ( 39) 00:13:56.331 12332.684 - 12392.262: 97.0982% ( 29) 00:13:56.331 12392.262 - 12451.840: 97.3043% ( 24) 00:13:56.331 12451.840 - 12511.418: 97.4159% ( 13) 00:13:56.331 12511.418 - 12570.996: 97.5962% ( 21) 00:13:56.331 12570.996 - 12630.575: 97.7593% ( 19) 00:13:56.331 12630.575 - 12690.153: 97.9138% ( 18) 00:13:56.331 12690.153 - 12749.731: 97.9997% ( 10) 00:13:56.331 12749.731 - 12809.309: 98.0855% ( 10) 00:13:56.331 12809.309 - 12868.887: 98.1456% ( 7) 00:13:56.331 12868.887 - 12928.465: 98.2229% ( 9) 00:13:56.331 12928.465 - 12988.044: 98.2744% ( 6) 00:13:56.331 12988.044 - 13047.622: 98.3516% ( 9) 00:13:56.331 13047.622 - 13107.200: 98.4117% ( 7) 00:13:56.331 13107.200 - 13166.778: 98.4547% ( 5) 00:13:56.331 13166.778 - 13226.356: 98.4976% ( 5) 00:13:56.331 13226.356 - 13285.935: 98.5319% ( 4) 00:13:56.331 13285.935 - 13345.513: 98.5749% ( 5) 00:13:56.331 13345.513 - 13405.091: 98.6521% ( 9) 00:13:56.331 13405.091 - 13464.669: 98.7380% ( 10) 
00:13:56.331 [latency histogram data: remaining buckets from 13464.669 us to 42419.665 us; cumulative IO reaches 100.0000% at 42419.665 us]
00:13:56.332 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:13:56.332 ==============================================================================
00:13:56.332 Range in us Cumulative IO count
00:13:56.332 [latency histogram data: buckets from 8877.149 us to 39321.600 us; roughly half of all IO completes by ~10.6 ms, 99% by ~28.6 ms, and cumulative IO reaches 100.0000% at 39321.600 us]
00:13:56.332 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:13:56.332 ==============================================================================
00:13:56.332 Range in us Cumulative IO count
00:13:56.333 [latency histogram data: buckets from 8817.571 us to 36223.535 us; roughly half of all IO completes by ~10.6 ms, 99% by ~27 ms, and cumulative IO reaches 100.0000% at 36223.535 us]
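Each histogram row reads "<bucket floor> - <bucket ceiling>: <cumulative %> ( <IOs in this bucket> )", so percentiles can be read straight off the table: the first bucket whose cumulative share reaches N% bounds the pN latency. A minimal sketch of that lookup in plain C (the sample buckets are an abbreviated copy of the NSID 2 table above):

```c
/* Reading a percentile out of the cumulative histogram rows above. */
#include <stdio.h>

struct bucket { double ceiling_us; double cum_pct; };

static double percentile_us(const struct bucket *b, int n, double pct)
{
	for (int i = 0; i < n; i++) {
		if (b[i].cum_pct >= pct) {
			return b[i].ceiling_us; /* first bucket covering pct */
		}
	}
	return b[n - 1].ceiling_us;
}

int main(void)
{
	const struct bucket nsid2[] = {
		{ 10604.916, 50.6611 }, { 11141.120, 82.5206 },
		{ 14298.764, 98.9011 }, { 28597.527, 99.0041 },
		{ 39321.600, 100.0000 },
	};
	printf("p50 <= %.3f us, p99 <= %.3f us\n",
	       percentile_us(nsid2, 5, 50.0), percentile_us(nsid2, 5, 99.0));
	return 0;
}
```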
00:13:56.333 ************************************
00:13:56.333 END TEST nvme_perf
00:13:56.333 ************************************
00:13:56.333 13:08:43 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:13:56.333
00:13:56.333 real 0m2.801s
00:13:56.333 user 0m2.353s
00:13:56.333 sys 0m0.336s
00:13:56.333 13:08:43 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:56.333 13:08:43 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:13:56.590 13:08:43 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:13:56.590 13:08:43 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:56.590 13:08:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:56.590 13:08:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:56.590 ************************************
00:13:56.590 START TEST nvme_hello_world
00:13:56.590 ************************************
00:13:56.590 13:08:43 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:13:56.848 Initializing NVMe Controllers
00:13:56.848 Attached to 0000:00:10.0
00:13:56.848 Namespace ID: 1 size: 6GB
00:13:56.848 Attached to 0000:00:11.0
00:13:56.848 Namespace ID: 1 size: 5GB
00:13:56.848 Attached to 0000:00:13.0
00:13:56.848 Namespace ID: 1 size: 1GB
00:13:56.848 Attached to 0000:00:12.0
00:13:56.848 Namespace ID: 1 size: 4GB
00:13:56.848 Namespace ID: 2 size: 4GB
00:13:56.848 Namespace ID: 3 size: 4GB
00:13:56.848 Initialization complete.
00:13:56.848 INFO: using host memory buffer for IO
00:13:56.848 Hello world!
00:13:56.848 INFO: using host memory buffer for IO
00:13:56.848 Hello world!
00:13:56.848 INFO: using host memory buffer for IO
00:13:56.848 Hello world!
00:13:56.848 INFO: using host memory buffer for IO
00:13:56.848 Hello world!
00:13:56.848 INFO: using host memory buffer for IO
00:13:56.848 Hello world!
00:13:56.848 INFO: using host memory buffer for IO
00:13:56.848 Hello world!
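The hello_world example enumerates controllers with spdk_nvme_probe() and prints each active namespace before doing its buffer write/readback, which is exactly the output above (one "Hello world!" per namespace, six in total). A trimmed sketch of that attach flow using the public SPDK API (illustrative, not the example's exact source; the write/read half and error handling are omitted):

```c
/* Trimmed sketch of the hello_world attach flow. */
#include <stdio.h>
#include <stdint.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller the probe finds */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	for (uint32_t nsid = 1; nsid <= spdk_nvme_ctrlr_get_num_ns(ctrlr); nsid++) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		if (ns && spdk_nvme_ns_is_active(ns)) {
			printf("  Namespace ID: %u size: %juGB\n", nsid,
			       (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
		}
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_world";
	opts.shm_id = 0; /* the -i 0 flag seen in the log */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL trid: enumerate all local PCIe NVMe controllers */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```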
00:13:56.848 ************************************ 00:13:56.848 END TEST nvme_hello_world 00:13:56.848 ************************************ 00:13:56.848 00:13:56.848 real 0m0.356s 00:13:56.848 user 0m0.149s 00:13:56.848 sys 0m0.159s 00:13:56.848 13:08:43 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.848 13:08:43 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:56.848 13:08:43 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:56.848 13:08:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.848 13:08:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.848 13:08:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.848 ************************************ 00:13:56.848 START TEST nvme_sgl 00:13:56.848 ************************************ 00:13:56.848 13:08:43 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:57.105 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:13:57.105 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:13:57.105 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:13:57.105 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:13:57.105 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:13:57.105 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:13:57.105 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:13:57.105 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:13:57.363 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:13:57.363 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:13:57.363 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:13:57.363 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:13:57.363 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:13:57.363 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:13:57.363 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:13:57.363 NVMe Readv/Writev Request test 00:13:57.363 Attached to 0000:00:10.0 00:13:57.363 Attached to 0000:00:11.0 00:13:57.363 Attached to 0000:00:13.0 00:13:57.363 Attached to 0000:00:12.0 00:13:57.363 0000:00:10.0: build_io_request_2 test passed 00:13:57.363 0000:00:10.0: build_io_request_4 test passed 00:13:57.363 0000:00:10.0: build_io_request_5 test passed 00:13:57.363 0000:00:10.0: build_io_request_6 test passed 00:13:57.363 0000:00:10.0: build_io_request_7 test passed 00:13:57.363 0000:00:10.0: build_io_request_10 test passed 00:13:57.363 0000:00:11.0: build_io_request_2 test passed 00:13:57.363 0000:00:11.0: build_io_request_4 test passed 00:13:57.363 0000:00:11.0: build_io_request_5 test passed 00:13:57.363 0000:00:11.0: build_io_request_6 test passed 00:13:57.363 0000:00:11.0: build_io_request_7 test passed 00:13:57.363 0000:00:11.0: build_io_request_10 test passed 00:13:57.363 Cleaning up... 00:13:57.363 ************************************ 00:13:57.363 END TEST nvme_sgl 00:13:57.363 ************************************ 00:13:57.363 00:13:57.363 real 0m0.436s 00:13:57.363 user 0m0.230s 00:13:57.363 sys 0m0.155s 00:13:57.363 13:08:44 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.363 13:08:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:13:57.363 13:08:44 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:57.363 13:08:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.363 13:08:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.363 13:08:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.363 ************************************ 00:13:57.363 START TEST nvme_e2edp 00:13:57.363 ************************************ 00:13:57.363 13:08:44 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:57.620 NVMe Write/Read with End-to-End data protection test 00:13:57.620 Attached to 0000:00:10.0 00:13:57.620 Attached to 0000:00:11.0 00:13:57.620 Attached to 0000:00:13.0 00:13:57.620 Attached to 0000:00:12.0 00:13:57.620 Cleaning up... 
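The SGL test drives spdk_nvme_ns_cmd_writev()/readv(), whose scatter-gather payloads are described by two caller-supplied callbacks; each numbered build_io_request case feeds a different SGE layout. The "Invalid IO length parameter" lines are the negative cases the driver is expected to reject (totals not aligned to the block size), while "test passed" marks the valid layouts. The e2edp test that follows exercises the same kind of I/O with end-to-end data protection enabled. A sketch of the callback pair (the sge/sgl_ctx structs are hypothetical helpers for the sketch; real code would honor the offset argument):

```c
/* Sketch of the scatter-gather callbacks spdk_nvme_ns_cmd_writev() expects. */
#include "spdk/nvme.h"

struct sge { void *addr; uint32_t len; };
struct sgl_ctx { struct sge *sges; int nsge; int cur; };

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
	struct sgl_ctx *s = cb_arg;

	s->cur = 0; /* real code would seek to `offset` within the list */
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
	struct sgl_ctx *s = cb_arg;

	*address = s->sges[s->cur].addr;
	*length = s->sges[s->cur].len;
	s->cur++;
	return 0;
}

/* The driver walks reset_sgl/next_sge to translate the list into PRPs or
 * NVMe SGL descriptors, rejecting requests whose total length is invalid
 * (the "Invalid IO length parameter" cases above). */
int
submit_writev(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	      struct sgl_ctx *s, uint64_t lba, uint32_t lba_count,
	      spdk_nvme_cmd_cb cb, void *cb_arg)
{
	return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count, cb, cb_arg,
				       0, reset_sgl, next_sge);
}
```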
00:13:57.620 ************************************ 00:13:57.620 END TEST nvme_e2edp 00:13:57.620 ************************************ 00:13:57.620 00:13:57.620 real 0m0.336s 00:13:57.620 user 0m0.135s 00:13:57.620 sys 0m0.158s 00:13:57.620 13:08:44 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.620 13:08:44 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:13:57.620 13:08:44 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:57.620 13:08:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.620 13:08:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.620 13:08:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.620 ************************************ 00:13:57.620 START TEST nvme_reserve 00:13:57.620 ************************************ 00:13:57.620 13:08:44 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:58.186 ===================================================== 00:13:58.186 NVMe Controller at PCI bus 0, device 16, function 0 00:13:58.186 ===================================================== 00:13:58.186 Reservations: Not Supported 00:13:58.186 ===================================================== 00:13:58.186 NVMe Controller at PCI bus 0, device 17, function 0 00:13:58.186 ===================================================== 00:13:58.186 Reservations: Not Supported 00:13:58.186 ===================================================== 00:13:58.186 NVMe Controller at PCI bus 0, device 19, function 0 00:13:58.186 ===================================================== 00:13:58.186 Reservations: Not Supported 00:13:58.186 ===================================================== 00:13:58.186 NVMe Controller at PCI bus 0, device 18, function 0 00:13:58.186 ===================================================== 00:13:58.186 Reservations: Not Supported 00:13:58.186 Reservation test passed 00:13:58.186 ************************************ 00:13:58.186 END TEST nvme_reserve 00:13:58.186 ************************************ 00:13:58.186 00:13:58.186 real 0m0.372s 00:13:58.186 user 0m0.148s 00:13:58.186 sys 0m0.169s 00:13:58.186 13:08:44 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.186 13:08:45 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:13:58.186 13:08:45 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:58.186 13:08:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.186 13:08:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.186 13:08:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.186 ************************************ 00:13:58.186 START TEST nvme_err_injection 00:13:58.186 ************************************ 00:13:58.186 13:08:45 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:58.443 NVMe Error Injection test 00:13:58.443 Attached to 0000:00:10.0 00:13:58.443 Attached to 0000:00:11.0 00:13:58.443 Attached to 0000:00:13.0 00:13:58.443 Attached to 0000:00:12.0 00:13:58.443 0000:00:11.0: get features failed as expected 00:13:58.443 0000:00:13.0: get features failed as expected 00:13:58.443 0000:00:12.0: get features failed as expected 00:13:58.443 0000:00:10.0: get features failed as expected 00:13:58.443 
0000:00:10.0: get features successfully as expected 00:13:58.443 0000:00:11.0: get features successfully as expected 00:13:58.443 0000:00:13.0: get features successfully as expected 00:13:58.443 0000:00:12.0: get features successfully as expected 00:13:58.443 0000:00:10.0: read failed as expected 00:13:58.443 0000:00:11.0: read failed as expected 00:13:58.443 0000:00:13.0: read failed as expected 00:13:58.443 0000:00:12.0: read failed as expected 00:13:58.443 0000:00:10.0: read successfully as expected 00:13:58.443 0000:00:11.0: read successfully as expected 00:13:58.443 0000:00:13.0: read successfully as expected 00:13:58.443 0000:00:12.0: read successfully as expected 00:13:58.443 Cleaning up... 00:13:58.443 ************************************ 00:13:58.443 END TEST nvme_err_injection 00:13:58.443 ************************************ 00:13:58.443 00:13:58.443 real 0m0.359s 00:13:58.443 user 0m0.150s 00:13:58.443 sys 0m0.162s 00:13:58.443 13:08:45 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.443 13:08:45 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:13:58.443 13:08:45 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:58.443 13:08:45 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:13:58.443 13:08:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.443 13:08:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.443 ************************************ 00:13:58.443 START TEST nvme_overhead 00:13:58.443 ************************************ 00:13:58.443 13:08:45 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:59.817 Initializing NVMe Controllers 00:13:59.817 Attached to 0000:00:10.0 00:13:59.817 Attached to 0000:00:11.0 00:13:59.817 Attached to 0000:00:13.0 00:13:59.817 Attached to 0000:00:12.0 00:13:59.817 Initialization complete. Launching workers. 
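The error-injection test above first makes GET FEATURES "fail as expected" on every controller, then clears the injection so the same command "succeeds as expected". Assuming the injection helpers declared in spdk/nvme.h, the pattern looks roughly like this (a sketch; per the header's contract, a NULL qpair targets the admin queue):

```c
/* Sketch of the inject -> expect failure -> clear -> expect success flow. */
#include "spdk/nvme.h"

static int
inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Fail the next GET FEATURES command with a generic Invalid Field
	 * status, without dropping it from submission. */
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
						       SPDK_NVME_OPC_GET_FEATURES,
						       false /* do submit */,
						       0 /* no timeout */,
						       1 /* fail one command */,
						       SPDK_NVME_SCT_GENERIC,
						       SPDK_NVME_SC_INVALID_FIELD);
}

static void
clear_get_features_injection(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
						   SPDK_NVME_OPC_GET_FEATURES);
}
```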
00:13:59.817 submit (in ns) avg, min, max = 15651.9, 14086.8, 94861.8
00:13:59.817 complete (in ns) avg, min, max = 10143.5, 9005.9, 113404.5
00:13:59.817
00:13:59.817 Submit histogram
00:13:59.817 ================
00:13:59.817 Range in us Cumulative Count
00:13:59.817 [submit-overhead histogram data: buckets from 14.080 us to 94.953 us; about 99% of submissions take under 27 us, and the cumulative count reaches 100.0000% at 94.953 us]
00:13:59.818 Complete histogram
00:13:59.818 ==================
00:13:59.818 Range in us Cumulative Count
00:13:59.818 [completion-overhead histogram data: buckets from 8.960 us to 113.571 us; most completions take 9-11 us, and the cumulative count reaches 100.0000% at 113.571 us]
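The submit histogram measures time spent inside the submission call itself; the complete histogram measures time spent reaping each IO in spdk_nvme_qpair_process_completions(). A rough sketch of how such per-call overhead can be sampled with the TSC helpers from spdk/env.h (illustrative, not the overhead tool's exact code):

```c
/* Per-call overhead sampling: read the TSC around the submit call and
 * around completion polling, then convert ticks to nanoseconds. */
#include "spdk/env.h"
#include "spdk/nvme.h"

static uint64_t
ticks_to_ns(uint64_t ticks)
{
	return ticks * 1000000000ULL / spdk_get_ticks_hz();
}

uint64_t
timed_submit_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		void *buf, uint64_t lba, spdk_nvme_cmd_cb cb, void *cb_arg)
{
	uint64_t start = spdk_get_ticks();

	spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, cb, cb_arg, 0);
	return ticks_to_ns(spdk_get_ticks() - start); /* "submit (in ns)" */
}

uint64_t
timed_complete_ns(struct spdk_nvme_qpair *qpair)
{
	uint64_t start = spdk_get_ticks();

	spdk_nvme_qpair_process_completions(qpair, 0 /* reap all ready */);
	return ticks_to_ns(spdk_get_ticks() - start); /* "complete (in ns)" */
}
```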
00:13:59.819 ************************************
00:13:59.819 END TEST nvme_overhead
00:13:59.819 ************************************
00:13:59.819
00:13:59.819 real 0m1.339s
00:13:59.819 user 0m1.125s
00:13:59.819 sys 0m0.164s
00:13:59.819 13:08:46 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:59.819 13:08:46 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:14:00.076 13:08:46 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:14:00.076 13:08:46 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:14:00.076 13:08:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:00.076 13:08:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:00.076 ************************************
00:14:00.076 START TEST nvme_arbitration
00:14:00.076 ************************************
00:14:00.076 13:08:46 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:14:03.357 Initializing NVMe Controllers
00:14:03.357 Attached to 0000:00:10.0
00:14:03.357 Attached to 0000:00:11.0
00:14:03.357 Attached to 0000:00:13.0
00:14:03.357 Attached to 0000:00:12.0
00:14:03.357 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:14:03.357 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:14:03.357 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:14:03.357 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:14:03.357 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:14:03.357 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:14:03.357 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with
configuration: 00:14:03.357 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:14:03.357 Initialization complete. Launching workers. 00:14:03.357 Starting thread on core 1 with urgent priority queue 00:14:03.357 Starting thread on core 2 with urgent priority queue 00:14:03.357 Starting thread on core 3 with urgent priority queue 00:14:03.357 Starting thread on core 0 with urgent priority queue 00:14:03.357 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:14:03.357 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:14:03.357 QEMU NVMe Ctrl (12341 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:14:03.357 QEMU NVMe Ctrl (12342 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:14:03.357 QEMU NVMe Ctrl (12343 ) core 2: 725.33 IO/s 137.87 secs/100000 ios 00:14:03.357 QEMU NVMe Ctrl (12342 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:14:03.357 ======================================================== 00:14:03.357 00:14:03.357 ************************************ 00:14:03.357 END TEST nvme_arbitration 00:14:03.357 ************************************ 00:14:03.357 00:14:03.357 real 0m3.501s 00:14:03.357 user 0m9.454s 00:14:03.358 sys 0m0.192s 00:14:03.358 13:08:50 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.358 13:08:50 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:14:03.619 13:08:50 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:03.619 13:08:50 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:03.619 13:08:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.619 13:08:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.619 ************************************ 00:14:03.619 START TEST nvme_single_aen 00:14:03.619 ************************************ 00:14:03.619 13:08:50 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:03.877 Asynchronous Event Request test 00:14:03.877 Attached to 0000:00:10.0 00:14:03.877 Attached to 0000:00:11.0 00:14:03.877 Attached to 0000:00:13.0 00:14:03.877 Attached to 0000:00:12.0 00:14:03.877 Reset controller to setup AER completions for this process 00:14:03.877 Registering asynchronous event callbacks... 
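The arbitration run above spreads six namespace workers over four lcores (-c 0xf) and submits through queue pairs created with different arbitration priorities, which is why per-core throughput differs (554 to 725 IO/s) under the same -q 64 load. Creating a prioritized queue pair looks roughly like this (a sketch; it assumes the controller was brought up with weighted-round-robin arbitration enabled, as the example's -a/-b options arrange):

```c
/* Sketch: allocate an I/O queue pair with urgent arbitration priority. */
#include "spdk/nvme.h"

struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = SPDK_NVME_QPRIO_URGENT; /* vs HIGH / MEDIUM / LOW */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```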
00:14:03.877 Getting orig temperature thresholds of all controllers 00:14:03.877 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:03.877 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:03.877 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:03.877 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:03.877 Setting all controllers temperature threshold low to trigger AER 00:14:03.877 Waiting for all controllers temperature threshold to be set lower 00:14:03.877 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:03.877 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:03.877 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:03.877 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:03.877 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:03.877 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:03.877 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:03.877 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:03.877 Waiting for all controllers to trigger AER and reset threshold 00:14:03.877 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:03.877 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:03.877 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:03.877 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:03.877 Cleaning up... 00:14:03.877 ************************************ 00:14:03.877 END TEST nvme_single_aen 00:14:03.877 ************************************ 00:14:03.877 00:14:03.877 real 0m0.299s 00:14:03.877 user 0m0.116s 00:14:03.877 sys 0m0.136s 00:14:03.877 13:08:50 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.877 13:08:50 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 13:08:50 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:14:03.877 13:08:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:03.877 13:08:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.877 13:08:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 ************************************ 00:14:03.877 START TEST nvme_doorbell_aers 00:14:03.877 ************************************ 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
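The AER tests above (and the doorbell test that follows) all hinge on the same arming pattern: register an asynchronous-event callback, then keep servicing the admin queue so event completions are delivered; the "aer_cb for log page 2" lines correspond to SMART/health events. A minimal sketch (the cdw0 decoding follows the NVMe completion layout: event type in bits 2:0, event info in bits 15:8):

```c
/* Sketch: arm a controller for AERs and poll until events arrive. */
#include <stdio.h>
#include <stdint.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		return;
	}
	/* cdw0 packs the async event: type in bits 2:0, info in bits 15:8 */
	uint32_t aen_event_type = cpl->cdw0 & 0x7;
	uint32_t aen_event_info = (cpl->cdw0 >> 8) & 0xff;

	printf("aer_cb: aen_event_type: 0x%02x, aen_event_info: 0x%02x\n",
	       aen_event_type, aen_event_info);
}

void
arm_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	for (;;) {
		/* poll the admin queue; AENs surface here */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```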
00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:03.877 13:08:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:04.135 [2024-12-06 13:08:51.141476] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:14.093 Executing: test_write_invalid_db 00:14:14.093 Waiting for AER completion... 00:14:14.093 Failure: test_write_invalid_db 00:14:14.093 00:14:14.093 Executing: test_invalid_db_write_overflow_sq 00:14:14.093 Waiting for AER completion... 00:14:14.093 Failure: test_invalid_db_write_overflow_sq 00:14:14.093 00:14:14.093 Executing: test_invalid_db_write_overflow_cq 00:14:14.093 Waiting for AER completion... 00:14:14.093 Failure: test_invalid_db_write_overflow_cq 00:14:14.094 00:14:14.094 13:09:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:14.094 13:09:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:14.351 [2024-12-06 13:09:01.201621] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:24.322 Executing: test_write_invalid_db 00:14:24.322 Waiting for AER completion... 00:14:24.323 Failure: test_write_invalid_db 00:14:24.323 00:14:24.323 Executing: test_invalid_db_write_overflow_sq 00:14:24.323 Waiting for AER completion... 00:14:24.323 Failure: test_invalid_db_write_overflow_sq 00:14:24.323 00:14:24.323 Executing: test_invalid_db_write_overflow_cq 00:14:24.323 Waiting for AER completion... 00:14:24.323 Failure: test_invalid_db_write_overflow_cq 00:14:24.323 00:14:24.323 13:09:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:24.323 13:09:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:24.323 [2024-12-06 13:09:11.204031] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:34.287 Executing: test_write_invalid_db 00:14:34.287 Waiting for AER completion... 00:14:34.287 Failure: test_write_invalid_db 00:14:34.287 00:14:34.287 Executing: test_invalid_db_write_overflow_sq 00:14:34.287 Waiting for AER completion... 00:14:34.287 Failure: test_invalid_db_write_overflow_sq 00:14:34.287 00:14:34.287 Executing: test_invalid_db_write_overflow_cq 00:14:34.287 Waiting for AER completion... 
00:14:34.287 Failure: test_invalid_db_write_overflow_cq 00:14:34.287 00:14:34.287 13:09:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:34.287 13:09:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:34.287 [2024-12-06 13:09:21.289471] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.316 Executing: test_write_invalid_db 00:14:44.316 Waiting for AER completion... 00:14:44.316 Failure: test_write_invalid_db 00:14:44.316 00:14:44.316 Executing: test_invalid_db_write_overflow_sq 00:14:44.316 Waiting for AER completion... 00:14:44.316 Failure: test_invalid_db_write_overflow_sq 00:14:44.316 00:14:44.316 Executing: test_invalid_db_write_overflow_cq 00:14:44.316 Waiting for AER completion... 00:14:44.316 Failure: test_invalid_db_write_overflow_cq 00:14:44.316 00:14:44.316 ************************************ 00:14:44.316 END TEST nvme_doorbell_aers 00:14:44.316 ************************************ 00:14:44.316 00:14:44.316 real 0m40.266s 00:14:44.316 user 0m34.129s 00:14:44.316 sys 0m5.733s 00:14:44.316 13:09:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.316 13:09:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:14:44.316 13:09:31 nvme -- nvme/nvme.sh@97 -- # uname 00:14:44.316 13:09:31 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:14:44.316 13:09:31 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:44.316 13:09:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:44.316 13:09:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.316 13:09:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.316 ************************************ 00:14:44.316 START TEST nvme_multi_aen 00:14:44.316 ************************************ 00:14:44.316 13:09:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:44.316 [2024-12-06 13:09:31.328625] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.316 [2024-12-06 13:09:31.328775] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.316 [2024-12-06 13:09:31.328803] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.330959] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.331024] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.331047] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.332609] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. 
Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.332666] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.332688] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.334253] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.334310] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 [2024-12-06 13:09:31.334336] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64725) is not found. Dropping the request. 00:14:44.581 Child process pid: 65247 00:14:44.581 [Child] Asynchronous Event Request test 00:14:44.581 [Child] Attached to 0000:00:10.0 00:14:44.581 [Child] Attached to 0000:00:11.0 00:14:44.581 [Child] Attached to 0000:00:13.0 00:14:44.581 [Child] Attached to 0000:00:12.0 00:14:44.581 [Child] Registering asynchronous event callbacks... 00:14:44.581 [Child] Getting orig temperature thresholds of all controllers 00:14:44.581 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.581 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.581 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.581 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.581 [Child] Waiting for all controllers to trigger AER and reset threshold 00:14:44.581 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.581 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.581 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.581 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.581 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.581 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.581 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.582 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.582 [Child] Cleaning up... 00:14:44.839 Asynchronous Event Request test 00:14:44.839 Attached to 0000:00:10.0 00:14:44.839 Attached to 0000:00:11.0 00:14:44.839 Attached to 0000:00:13.0 00:14:44.839 Attached to 0000:00:12.0 00:14:44.839 Reset controller to setup AER completions for this process 00:14:44.839 Registering asynchronous event callbacks... 
00:14:44.839 Getting orig temperature thresholds of all controllers 00:14:44.839 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.839 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.839 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.839 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:44.839 Setting all controllers temperature threshold low to trigger AER 00:14:44.839 Waiting for all controllers temperature threshold to be set lower 00:14:44.839 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.839 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:44.840 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.840 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:44.840 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.840 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:44.840 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:44.840 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:44.840 Waiting for all controllers to trigger AER and reset threshold 00:14:44.840 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.840 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.840 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.840 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:44.840 Cleaning up... 00:14:44.840 ************************************ 00:14:44.840 END TEST nvme_multi_aen 00:14:44.840 ************************************ 00:14:44.840 00:14:44.840 real 0m0.574s 00:14:44.840 user 0m0.207s 00:14:44.840 sys 0m0.277s 00:14:44.840 13:09:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.840 13:09:31 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:14:44.840 13:09:31 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:44.840 13:09:31 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:44.840 13:09:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.840 13:09:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.840 ************************************ 00:14:44.840 START TEST nvme_startup 00:14:44.840 ************************************ 00:14:44.840 13:09:31 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:45.098 Initializing NVMe Controllers 00:14:45.098 Attached to 0000:00:10.0 00:14:45.098 Attached to 0000:00:11.0 00:14:45.098 Attached to 0000:00:13.0 00:14:45.098 Attached to 0000:00:12.0 00:14:45.098 Initialization complete. 00:14:45.098 Time used:239426.422 (us). 
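Both the single-AER and multi-AER runs provoke their events the same way: they lower the temperature threshold below the reported 323 Kelvin so the controller posts a SMART AEN, then restore the original 343 Kelvin threshold. The repeated "(pid 64725) is not found. Dropping the request" errors appear to be expected noise in this harness: in multi-process mode, queued admin requests owned by a process that has exited are dropped, and these tests deliberately run short-lived processes (multi-aen forks child pid 65247). The nvme_startup output just above simply reports controller attach time (Time used: ~239 ms). A sketch of the threshold write that triggers the AEN (0x0140 = 320 Kelvin in cdw11 bits 15:0; completion handling omitted):

```c
/* Sketch: lower the composite temperature threshold to trigger a SMART AEN. */
#include "spdk/nvme.h"

static void
set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* a real test records the status and re-arms the AER here */
}

int
lower_temp_threshold(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t cdw11 = 0x0140; /* TMPTH: 320 Kelvin, below the 323 K reading */

	return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
					       SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					       cdw11, 0, NULL, 0,
					       set_feature_done, NULL);
}
```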
00:14:45.098 ************************************ 00:14:45.098 END TEST nvme_startup 00:14:45.098 ************************************ 00:14:45.098 00:14:45.098 real 0m0.336s 00:14:45.098 user 0m0.112s 00:14:45.098 sys 0m0.171s 00:14:45.098 13:09:32 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.098 13:09:32 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:14:45.098 13:09:32 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:14:45.098 13:09:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:45.098 13:09:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.098 13:09:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.098 ************************************ 00:14:45.098 START TEST nvme_multi_secondary 00:14:45.098 ************************************ 00:14:45.098 13:09:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:14:45.098 13:09:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65303 00:14:45.098 13:09:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:14:45.098 13:09:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65304 00:14:45.098 13:09:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:14:45.098 13:09:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:49.284 Initializing NVMe Controllers 00:14:49.284 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:49.284 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:49.284 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:49.284 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:49.284 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:49.284 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:49.284 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:49.284 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:49.284 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:49.284 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:49.284 Initialization complete. Launching workers. 
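The three spdk_nvme_perf instances launched above exercise SPDK's multi-process mode: all share shared-memory group id 0 (-i 0), so one instance becomes the DPDK primary process and the others attach to the same four controllers as secondaries on their own core masks. A hedged sketch of that launch pattern, with paths and arguments as in the log (the backgrounding is my condensation of the script's run_test plumbing):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # lcore 0, longest run
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # lcore 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &           # lcore 2
  wait "$pid0" "$pid1"   # each instance prints its own latency table on exit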
00:14:49.284 ======================================================== 00:14:49.284 Latency(us) 00:14:49.284 Device Information : IOPS MiB/s Average min max 00:14:49.284 PCIE (0000:00:10.0) NSID 1 from core 2: 2576.03 10.06 6207.16 1829.97 12836.83 00:14:49.284 PCIE (0000:00:11.0) NSID 1 from core 2: 2576.03 10.06 6210.12 1683.57 13148.59 00:14:49.284 PCIE (0000:00:13.0) NSID 1 from core 2: 2576.03 10.06 6218.30 1786.37 13252.24 00:14:49.284 PCIE (0000:00:12.0) NSID 1 from core 2: 2576.03 10.06 6222.65 1775.75 13018.67 00:14:49.284 PCIE (0000:00:12.0) NSID 2 from core 2: 2576.03 10.06 6224.06 1786.65 12815.56 00:14:49.284 PCIE (0000:00:12.0) NSID 3 from core 2: 2576.03 10.06 6224.76 1595.62 12744.73 00:14:49.284 ======================================================== 00:14:49.284 Total : 15456.21 60.38 6217.84 1595.62 13252.24 00:14:49.284 00:14:49.284 13:09:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65303 00:14:49.284 Initializing NVMe Controllers 00:14:49.284 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:49.284 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:49.284 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:49.284 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:49.284 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:49.284 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:49.284 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:49.284 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:49.284 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:49.284 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:49.284 Initialization complete. Launching workers. 00:14:49.284 ======================================================== 00:14:49.284 Latency(us) 00:14:49.284 Device Information : IOPS MiB/s Average min max 00:14:49.284 PCIE (0000:00:10.0) NSID 1 from core 1: 5490.69 21.45 2911.97 1224.31 7083.28 00:14:49.284 PCIE (0000:00:11.0) NSID 1 from core 1: 5490.69 21.45 2913.54 1234.76 7168.53 00:14:49.284 PCIE (0000:00:13.0) NSID 1 from core 1: 5490.69 21.45 2913.55 1303.59 7371.30 00:14:49.284 PCIE (0000:00:12.0) NSID 1 from core 1: 5490.69 21.45 2913.42 1286.73 7206.10 00:14:49.284 PCIE (0000:00:12.0) NSID 2 from core 1: 5490.69 21.45 2913.36 1271.32 7163.97 00:14:49.284 PCIE (0000:00:12.0) NSID 3 from core 1: 5490.69 21.45 2913.29 1249.17 7235.09 00:14:49.284 ======================================================== 00:14:49.284 Total : 32944.16 128.69 2913.19 1224.31 7371.30 00:14:49.284 00:14:50.655 Initializing NVMe Controllers 00:14:50.655 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.655 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:50.655 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:50.655 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:50.655 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:50.655 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:50.655 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:50.655 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:50.655 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:50.655 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:50.655 Initialization complete. Launching workers. 
00:14:50.655 ======================================================== 00:14:50.655 Latency(us) 00:14:50.655 Device Information : IOPS MiB/s Average min max 00:14:50.655 PCIE (0000:00:10.0) NSID 1 from core 0: 8469.04 33.08 1887.60 933.30 8221.86 00:14:50.655 PCIE (0000:00:11.0) NSID 1 from core 0: 8469.04 33.08 1888.75 919.62 8306.03 00:14:50.655 PCIE (0000:00:13.0) NSID 1 from core 0: 8469.04 33.08 1888.71 958.71 8249.63 00:14:50.655 PCIE (0000:00:12.0) NSID 1 from core 0: 8472.24 33.09 1887.94 965.64 7960.13 00:14:50.655 PCIE (0000:00:12.0) NSID 2 from core 0: 8472.24 33.09 1887.90 941.56 8753.04 00:14:50.655 PCIE (0000:00:12.0) NSID 3 from core 0: 8472.24 33.09 1887.86 930.18 8285.27 00:14:50.655 ======================================================== 00:14:50.655 Total : 50823.83 198.53 1888.13 919.62 8753.04 00:14:50.655 00:14:50.655 13:09:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65304 00:14:50.655 13:09:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65373 00:14:50.655 13:09:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:14:50.655 13:09:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65374 00:14:50.655 13:09:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:14:50.655 13:09:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:53.931 Initializing NVMe Controllers 00:14:53.931 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:53.931 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:53.931 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:53.931 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:53.931 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:53.931 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:53.931 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:53.931 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:53.931 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:53.931 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:53.931 Initialization complete. Launching workers. 
00:14:53.931 ======================================================== 00:14:53.931 Latency(us) 00:14:53.931 Device Information : IOPS MiB/s Average min max 00:14:53.931 PCIE (0000:00:10.0) NSID 1 from core 0: 5645.03 22.05 2832.42 1056.35 7182.67 00:14:53.931 PCIE (0000:00:11.0) NSID 1 from core 0: 5645.03 22.05 2834.32 1079.98 7112.57 00:14:53.931 PCIE (0000:00:13.0) NSID 1 from core 0: 5645.03 22.05 2834.16 1101.70 7162.52 00:14:53.931 PCIE (0000:00:12.0) NSID 1 from core 0: 5650.36 22.07 2831.92 1131.84 6870.69 00:14:53.931 PCIE (0000:00:12.0) NSID 2 from core 0: 5650.36 22.07 2831.77 1134.52 6575.19 00:14:53.931 PCIE (0000:00:12.0) NSID 3 from core 0: 5650.36 22.07 2831.66 1131.81 7083.05 00:14:53.931 ======================================================== 00:14:53.931 Total : 33886.15 132.37 2832.71 1056.35 7182.67 00:14:53.931 00:14:54.188 Initializing NVMe Controllers 00:14:54.188 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:54.188 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:54.188 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:54.188 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:54.188 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:54.188 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:54.188 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:54.188 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:54.188 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:54.188 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:54.188 Initialization complete. Launching workers. 00:14:54.188 ======================================================== 00:14:54.188 Latency(us) 00:14:54.188 Device Information : IOPS MiB/s Average min max 00:14:54.188 PCIE (0000:00:10.0) NSID 1 from core 1: 5237.25 20.46 3052.91 1064.18 6432.15 00:14:54.188 PCIE (0000:00:11.0) NSID 1 from core 1: 5237.25 20.46 3054.26 1084.57 6179.81 00:14:54.188 PCIE (0000:00:13.0) NSID 1 from core 1: 5237.25 20.46 3054.10 1077.28 6593.92 00:14:54.188 PCIE (0000:00:12.0) NSID 1 from core 1: 5237.25 20.46 3053.97 1074.77 6404.97 00:14:54.188 PCIE (0000:00:12.0) NSID 2 from core 1: 5237.25 20.46 3053.82 973.98 6630.63 00:14:54.188 PCIE (0000:00:12.0) NSID 3 from core 1: 5237.25 20.46 3053.65 910.75 6447.05 00:14:54.188 ======================================================== 00:14:54.188 Total : 31423.52 122.75 3053.78 910.75 6630.63 00:14:54.188 00:14:56.088 Initializing NVMe Controllers 00:14:56.088 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:56.088 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:56.088 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:56.088 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:56.088 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:56.088 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:56.088 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:56.088 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:56.088 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:56.088 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:56.088 Initialization complete. Launching workers. 
00:14:56.088 ======================================================== 00:14:56.088 Latency(us) 00:14:56.088 Device Information : IOPS MiB/s Average min max 00:14:56.088 PCIE (0000:00:10.0) NSID 1 from core 2: 3474.43 13.57 4602.73 1024.37 14497.17 00:14:56.088 PCIE (0000:00:11.0) NSID 1 from core 2: 3474.43 13.57 4603.91 1043.60 14300.91 00:14:56.088 PCIE (0000:00:13.0) NSID 1 from core 2: 3474.43 13.57 4600.34 998.98 14271.25 00:14:56.088 PCIE (0000:00:12.0) NSID 1 from core 2: 3474.43 13.57 4600.66 1015.69 16868.15 00:14:56.088 PCIE (0000:00:12.0) NSID 2 from core 2: 3474.43 13.57 4600.08 1020.89 16992.58 00:14:56.088 PCIE (0000:00:12.0) NSID 3 from core 2: 3474.43 13.57 4600.44 963.21 16251.42 00:14:56.088 ======================================================== 00:14:56.088 Total : 20846.57 81.43 4601.36 963.21 16992.58 00:14:56.088 00:14:56.088 ************************************ 00:14:56.088 END TEST nvme_multi_secondary 00:14:56.088 ************************************ 00:14:56.088 13:09:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65373 00:14:56.088 13:09:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65374 00:14:56.088 00:14:56.088 real 0m10.863s 00:14:56.088 user 0m18.701s 00:14:56.088 sys 0m1.039s 00:14:56.088 13:09:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.088 13:09:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:56.088 13:09:42 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:56.088 13:09:42 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:56.088 13:09:42 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64299 ]] 00:14:56.088 13:09:42 nvme -- common/autotest_common.sh@1094 -- # kill 64299 00:14:56.088 13:09:42 nvme -- common/autotest_common.sh@1095 -- # wait 64299 00:14:56.088 [2024-12-06 13:09:42.990536] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.990674] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.990745] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.990814] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.994190] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.994269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.994298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.994336] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.996935] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 
00:14:56.088 [2024-12-06 13:09:42.997009] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.997037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.997065] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.999719] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.999791] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.999819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.088 [2024-12-06 13:09:42.999846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65246) is not found. Dropping the request. 00:14:56.346 13:09:43 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:14:56.346 13:09:43 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:14:56.346 13:09:43 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:56.346 13:09:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:56.346 13:09:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.346 13:09:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.346 ************************************ 00:14:56.346 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:56.346 ************************************ 00:14:56.346 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:56.346 * Looking for test storage... 
00:14:56.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:56.346 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:56.346 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:56.346 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.605 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:56.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.606 --rc genhtml_branch_coverage=1 00:14:56.606 --rc genhtml_function_coverage=1 00:14:56.606 --rc genhtml_legend=1 00:14:56.606 --rc geninfo_all_blocks=1 00:14:56.606 --rc geninfo_unexecuted_blocks=1 00:14:56.606 00:14:56.606 ' 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:56.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.606 --rc genhtml_branch_coverage=1 00:14:56.606 --rc genhtml_function_coverage=1 00:14:56.606 --rc genhtml_legend=1 00:14:56.606 --rc geninfo_all_blocks=1 00:14:56.606 --rc geninfo_unexecuted_blocks=1 00:14:56.606 00:14:56.606 ' 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:56.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.606 --rc genhtml_branch_coverage=1 00:14:56.606 --rc genhtml_function_coverage=1 00:14:56.606 --rc genhtml_legend=1 00:14:56.606 --rc geninfo_all_blocks=1 00:14:56.606 --rc geninfo_unexecuted_blocks=1 00:14:56.606 00:14:56.606 ' 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:56.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.606 --rc genhtml_branch_coverage=1 00:14:56.606 --rc genhtml_function_coverage=1 00:14:56.606 --rc genhtml_legend=1 00:14:56.606 --rc geninfo_all_blocks=1 00:14:56.606 --rc geninfo_unexecuted_blocks=1 00:14:56.606 00:14:56.606 ' 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:56.606 
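Before any of that machinery runs, the test has to pick a controller to aim at; the get_first_nvme_bdf helper invoked just below boils down to asking gen_nvme.sh for every NVMe PCI address and keeping the first one. Condensed (head -n1 stands in for the helper's first-element echo):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  echo "$bdf"   # 0000:00:10.0 on this machine, per the log below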
13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65536 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65536 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65536 ']' 00:14:56.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.606 13:09:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:56.606 [2024-12-06 13:09:43.575970] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:14:56.606 [2024-12-06 13:09:43.576153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65536 ] 00:14:56.865 [2024-12-06 13:09:43.774726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.123 [2024-12-06 13:09:43.936801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.123 [2024-12-06 13:09:43.936949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.123 [2024-12-06 13:09:43.937079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.123 [2024-12-06 13:09:43.937098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:58.057 nvme0n1 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_0pJPv.txt 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:58.057 true 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733490584 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65559 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:58.057 13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:58.057 
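At this point the target is up with nvme0 attached, the one-shot error injection is armed, and the admin command that will get stuck has been fired off in the background. Reduced to its RPC skeleton (same tools and arguments as the log; the base64-encoded Get Features command is elided as $CMD_B64), the flow the surrounding lines execute is roughly:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Admin opcode 0x0a (Get Features) will complete with sct=0 / sc=1 (Invalid
  # Opcode) after up to 15 s, and is held rather than submitted to the drive.
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" &   # gets stuck
  sleep 2
  $RPC bdev_nvme_reset_controller nvme0   # reset manually completes the stuck command
  wait $!                                 # send_cmd returns once the reset flushes it
  $RPC bdev_nvme_detach_controller nvme0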
13:09:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:59.957 [2024-12-06 13:09:46.938157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:14:59.957 [2024-12-06 13:09:46.938626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:59.957 [2024-12-06 13:09:46.938702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.957 [2024-12-06 13:09:46.938723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.957 [2024-12-06 13:09:46.940753] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:14:59.957 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65559 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65559 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65559 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.957 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:00.216 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.216 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:15:00.216 13:09:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_0pJPv.txt 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_0pJPv.txt 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65536 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65536 ']' 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65536 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65536 00:15:00.216 killing process with pid 65536 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65536' 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65536 00:15:00.216 13:09:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65536 00:15:02.747 13:09:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:15:02.747 13:09:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:15:02.747 00:15:02.747 real 0m6.157s 00:15:02.747 user 0m21.639s 00:15:02.747 sys 0m0.755s 00:15:02.747 13:09:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 
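The hexdump gymnastics above unpack the saved completion (the base64 .cpl field) byte by byte. Assuming, as the extracted values bear out, that those 16 bytes are the four raw completion-queue-entry dwords, the same check can be written directly against DW3, where the status code sits in bits 24:17 and the status code type in bits 27:25:

  cpl_b64='AAAAAAAAAAAAAAAAAAACAA=='   # completion captured above
  dw3=$(base64 -d <<<"$cpl_b64" | od -An -tu4 -j12 -N4 | tr -d ' ')
  printf 'sc=0x%x sct=0x%x\n' $(( (dw3 >> 17) & 0xff )) $(( (dw3 >> 25) & 0x7 ))
  # prints sc=0x1 sct=0x0, matching the injected --sc 1 --sct 0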
00:15:02.747 ************************************ 00:15:02.747 13:09:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 END TEST bdev_nvme_reset_stuck_adm_cmd 00:15:02.747 ************************************ 00:15:02.747 13:09:49 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:15:02.747 13:09:49 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:15:02.747 13:09:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:02.747 13:09:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.747 13:09:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 ************************************ 00:15:02.747 START TEST nvme_fio 00:15:02.747 ************************************ 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:02.747 13:09:49 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:02.747 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:03.006 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:03.006 13:09:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:03.264 13:09:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:03.265 13:09:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:03.265 13:09:50 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:03.265 13:09:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:03.523 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:03.523 fio-3.35 00:15:03.523 Starting 1 thread 00:15:06.803 00:15:06.803 test: (groupid=0, jobs=1): err= 0: pid=65717: Fri Dec 6 13:09:53 2024 00:15:06.803 read: IOPS=16.1k, BW=63.0MiB/s (66.1MB/s)(126MiB/2001msec) 00:15:06.803 slat (nsec): min=4590, max=60764, avg=6583.10, stdev=2162.86 00:15:06.803 clat (usec): min=257, max=10408, avg=3940.68, stdev=633.34 00:15:06.803 lat (usec): min=263, max=10469, avg=3947.27, stdev=634.21 00:15:06.803 clat percentiles (usec): 00:15:06.803 | 1.00th=[ 2900], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3556], 00:15:06.803 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3884], 00:15:06.803 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4686], 00:15:06.803 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 8291], 99.95th=[ 8455], 00:15:06.803 | 99.99th=[10290] 00:15:06.803 bw ( KiB/s): min=56696, max=72952, per=97.91%, avg=63202.67, stdev=8599.45, samples=3 00:15:06.803 iops : min=14174, max=18238, avg=15800.67, stdev=2149.86, samples=3 00:15:06.803 write: IOPS=16.2k, BW=63.2MiB/s (66.2MB/s)(126MiB/2001msec); 0 zone resets 00:15:06.803 slat (nsec): min=4777, max=43957, avg=6698.06, stdev=2093.74 00:15:06.803 clat (usec): min=290, max=10248, avg=3954.87, stdev=646.15 00:15:06.803 lat (usec): min=295, max=10262, avg=3961.57, stdev=646.98 00:15:06.803 clat percentiles (usec): 00:15:06.803 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3556], 00:15:06.803 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3884], 00:15:06.803 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:15:06.803 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 8356], 99.95th=[ 8455], 00:15:06.803 | 99.99th=[10159] 00:15:06.803 bw ( KiB/s): min=57024, max=72520, per=97.28%, avg=62909.33, stdev=8392.86, samples=3 00:15:06.803 iops : min=14256, max=18130, avg=15727.33, stdev=2098.21, samples=3 00:15:06.803 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:15:06.803 lat (msec) : 2=0.06%, 4=62.83%, 10=37.05%, 20=0.02% 00:15:06.803 cpu : usr=99.00%, sys=0.10%, ctx=2, majf=0, 
minf=609 00:15:06.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:06.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.803 issued rwts: total=32293,32350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.803 00:15:06.803 Run status group 0 (all jobs): 00:15:06.803 READ: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=126MiB (132MB), run=2001-2001msec 00:15:06.803 WRITE: bw=63.2MiB/s (66.2MB/s), 63.2MiB/s-63.2MiB/s (66.2MB/s-66.2MB/s), io=126MiB (133MB), run=2001-2001msec 00:15:06.803 ----------------------------------------------------- 00:15:06.803 Suppressions used: 00:15:06.803 count bytes template 00:15:06.803 1 32 /usr/src/fio/parse.c 00:15:06.803 1 8 libtcmalloc_minimal.so 00:15:06.803 ----------------------------------------------------- 00:15:06.803 00:15:06.803 13:09:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:06.803 13:09:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:06.803 13:09:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:06.803 13:09:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:07.061 13:09:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:07.061 13:09:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:07.344 13:09:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:07.344 13:09:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:07.344 13:09:54 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:07.344 13:09:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:07.631 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:07.631 fio-3.35 00:15:07.631 Starting 1 thread 00:15:10.917 00:15:10.917 test: (groupid=0, jobs=1): err= 0: pid=65778: Fri Dec 6 13:09:57 2024 00:15:10.917 read: IOPS=16.8k, BW=65.5MiB/s (68.7MB/s)(131MiB/2001msec) 00:15:10.917 slat (nsec): min=4695, max=63772, avg=6361.60, stdev=1754.65 00:15:10.917 clat (usec): min=269, max=9155, avg=3795.77, stdev=453.57 00:15:10.917 lat (usec): min=275, max=9219, avg=3802.14, stdev=454.18 00:15:10.917 clat percentiles (usec): 00:15:10.917 | 1.00th=[ 3163], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3523], 00:15:10.917 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:15:10.917 | 70.00th=[ 3785], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:15:10.917 | 99.00th=[ 4948], 99.50th=[ 5669], 99.90th=[ 6980], 99.95th=[ 8029], 00:15:10.917 | 99.99th=[ 9110] 00:15:10.917 bw ( KiB/s): min=62632, max=67912, per=98.45%, avg=66040.00, stdev=2956.19, samples=3 00:15:10.917 iops : min=15658, max=16978, avg=16510.00, stdev=739.05, samples=3 00:15:10.917 write: IOPS=16.8k, BW=65.6MiB/s (68.8MB/s)(131MiB/2001msec); 0 zone resets 00:15:10.917 slat (nsec): min=4716, max=52450, avg=6399.68, stdev=1741.46 00:15:10.917 clat (usec): min=436, max=9086, avg=3799.55, stdev=446.16 00:15:10.917 lat (usec): min=449, max=9103, avg=3805.95, stdev=446.76 00:15:10.917 clat percentiles (usec): 00:15:10.917 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3523], 00:15:10.917 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:15:10.917 | 70.00th=[ 3785], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:15:10.917 | 99.00th=[ 4883], 99.50th=[ 5669], 99.90th=[ 7046], 99.95th=[ 8094], 00:15:10.917 | 99.99th=[ 8979] 00:15:10.917 bw ( KiB/s): min=62272, max=68072, per=98.08%, avg=65922.67, stdev=3178.13, samples=3 00:15:10.917 iops : min=15568, max=17018, avg=16480.67, stdev=794.53, samples=3 00:15:10.917 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:15:10.917 lat (msec) : 2=0.05%, 4=77.14%, 10=22.76% 00:15:10.917 cpu : usr=99.00%, sys=0.15%, ctx=3, majf=0, minf=608 00:15:10.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:10.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.917 issued rwts: total=33558,33622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.917 00:15:10.917 Run status group 0 (all jobs): 00:15:10.917 READ: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=131MiB (137MB), run=2001-2001msec 00:15:10.917 WRITE: bw=65.6MiB/s (68.8MB/s), 65.6MiB/s-65.6MiB/s (68.8MB/s-68.8MB/s), io=131MiB (138MB), run=2001-2001msec 00:15:10.917 ----------------------------------------------------- 00:15:10.917 Suppressions used: 00:15:10.917 count bytes template 00:15:10.917 1 32 /usr/src/fio/parse.c 00:15:10.917 1 8 libtcmalloc_minimal.so 00:15:10.917 ----------------------------------------------------- 00:15:10.917 00:15:10.917 13:09:57 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:10.917 13:09:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:10.917 13:09:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:10.917 13:09:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:11.175 13:09:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:11.175 13:09:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:11.434 13:09:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:11.434 13:09:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:11.434 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:11.692 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:11.692 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:11.693 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:11.693 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:11.693 13:09:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:11.693 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:11.693 fio-3.35 00:15:11.693 Starting 1 thread 00:15:15.879 00:15:15.879 test: (groupid=0, jobs=1): err= 0: pid=65839: Fri Dec 6 13:10:02 2024 00:15:15.879 read: IOPS=16.1k, BW=62.9MiB/s (66.0MB/s)(126MiB/2001msec) 00:15:15.879 slat (nsec): min=4618, max=70874, avg=6454.26, stdev=1990.25 00:15:15.879 clat (usec): min=430, max=9232, avg=3949.29, stdev=678.82 00:15:15.879 lat (usec): min=437, max=9285, avg=3955.75, stdev=679.73 00:15:15.879 clat percentiles (usec): 00:15:15.879 | 1.00th=[ 3195], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3458], 00:15:15.879 | 30.00th=[ 3523], 40.00th=[ 
3621], 50.00th=[ 3720], 60.00th=[ 4080], 00:15:15.879 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4883], 00:15:15.879 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 7570], 99.95th=[ 7767], 00:15:15.879 | 99.99th=[ 9110] 00:15:15.879 bw ( KiB/s): min=59000, max=69880, per=98.93%, avg=63763.67, stdev=5564.70, samples=3 00:15:15.879 iops : min=14750, max=17470, avg=15940.67, stdev=1391.27, samples=3 00:15:15.879 write: IOPS=16.1k, BW=63.1MiB/s (66.1MB/s)(126MiB/2001msec); 0 zone resets 00:15:15.879 slat (nsec): min=4819, max=44975, avg=6605.24, stdev=1905.69 00:15:15.879 clat (usec): min=297, max=9103, avg=3958.69, stdev=675.57 00:15:15.879 lat (usec): min=304, max=9115, avg=3965.30, stdev=676.47 00:15:15.879 clat percentiles (usec): 00:15:15.879 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458], 00:15:15.879 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 4113], 00:15:15.879 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4948], 00:15:15.879 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 7635], 99.95th=[ 7767], 00:15:15.879 | 99.99th=[ 8848] 00:15:15.879 bw ( KiB/s): min=59384, max=69216, per=98.30%, avg=63489.33, stdev=5112.59, samples=3 00:15:15.879 iops : min=14846, max=17304, avg=15872.33, stdev=1278.15, samples=3 00:15:15.880 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:15.880 lat (msec) : 2=0.04%, 4=55.59%, 10=44.33% 00:15:15.880 cpu : usr=98.95%, sys=0.15%, ctx=3, majf=0, minf=608 00:15:15.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:15.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.880 issued rwts: total=32244,32309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.880 00:15:15.880 Run status group 0 (all jobs): 00:15:15.880 READ: bw=62.9MiB/s (66.0MB/s), 62.9MiB/s-62.9MiB/s (66.0MB/s-66.0MB/s), io=126MiB (132MB), run=2001-2001msec 00:15:15.880 WRITE: bw=63.1MiB/s (66.1MB/s), 63.1MiB/s-63.1MiB/s (66.1MB/s-66.1MB/s), io=126MiB (132MB), run=2001-2001msec 00:15:15.880 ----------------------------------------------------- 00:15:15.880 Suppressions used: 00:15:15.880 count bytes template 00:15:15.880 1 32 /usr/src/fio/parse.c 00:15:15.880 1 8 libtcmalloc_minimal.so 00:15:15.880 ----------------------------------------------------- 00:15:15.880 00:15:15.880 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:15.880 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:15.880 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:15.880 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:15.880 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:15.880 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:16.137 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:16.137 13:10:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:16.137 13:10:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:16.137 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:16.137 fio-3.35 00:15:16.137 Starting 1 thread 00:15:19.523 00:15:19.523 test: (groupid=0, jobs=1): err= 0: pid=65906: Fri Dec 6 13:10:06 2024 00:15:19.523 read: IOPS=15.6k, BW=60.8MiB/s (63.7MB/s)(122MiB/2001msec) 00:15:19.523 slat (nsec): min=4674, max=48336, avg=6671.16, stdev=1929.30 00:15:19.523 clat (usec): min=272, max=11095, avg=4091.53, stdev=496.25 00:15:19.523 lat (usec): min=278, max=11102, avg=4098.20, stdev=496.94 00:15:19.523 clat percentiles (usec): 00:15:19.523 | 1.00th=[ 3359], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3654], 00:15:19.523 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 4228], 60.00th=[ 4293], 00:15:19.523 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:15:19.523 | 99.00th=[ 5538], 99.50th=[ 6718], 99.90th=[ 7504], 99.95th=[ 7635], 00:15:19.523 | 99.99th=[ 7963] 00:15:19.523 bw ( KiB/s): min=59888, max=64720, per=100.00%, avg=62512.00, stdev=2442.71, samples=3 00:15:19.523 iops : min=14972, max=16180, avg=15628.00, stdev=610.68, samples=3 00:15:19.523 write: IOPS=15.6k, BW=60.8MiB/s (63.7MB/s)(122MiB/2001msec); 0 zone resets 00:15:19.523 slat (nsec): min=4850, max=75009, avg=6783.42, stdev=1933.63 00:15:19.523 clat (usec): min=238, max=7932, avg=4104.41, stdev=505.74 00:15:19.523 lat (usec): min=244, max=7954, avg=4111.19, stdev=506.45 00:15:19.523 clat percentiles (usec): 00:15:19.523 | 1.00th=[ 3392], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:15:19.523 | 30.00th=[ 3720], 40.00th=[ 3851], 50.00th=[ 4228], 60.00th=[ 4359], 00:15:19.523 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4621], 
00:15:19.523 | 99.00th=[ 5866], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 7570], 00:15:19.523 | 99.99th=[ 7701] 00:15:19.523 bw ( KiB/s): min=58816, max=64064, per=99.68%, avg=62050.67, stdev=2829.15, samples=3 00:15:19.523 iops : min=14704, max=16016, avg=15512.67, stdev=707.29, samples=3 00:15:19.523 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:15:19.523 lat (msec) : 2=0.05%, 4=45.41%, 10=54.50%, 20=0.01% 00:15:19.523 cpu : usr=98.85%, sys=0.20%, ctx=3, majf=0, minf=606 00:15:19.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:19.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:19.523 issued rwts: total=31128,31141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:19.523 00:15:19.523 Run status group 0 (all jobs): 00:15:19.523 READ: bw=60.8MiB/s (63.7MB/s), 60.8MiB/s-60.8MiB/s (63.7MB/s-63.7MB/s), io=122MiB (128MB), run=2001-2001msec 00:15:19.523 WRITE: bw=60.8MiB/s (63.7MB/s), 60.8MiB/s-60.8MiB/s (63.7MB/s-63.7MB/s), io=122MiB (128MB), run=2001-2001msec 00:15:19.523 ----------------------------------------------------- 00:15:19.523 Suppressions used: 00:15:19.523 count bytes template 00:15:19.523 1 32 /usr/src/fio/parse.c 00:15:19.523 1 8 libtcmalloc_minimal.so 00:15:19.523 ----------------------------------------------------- 00:15:19.523 00:15:19.523 13:10:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:19.523 13:10:06 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:19.523 00:15:19.523 real 0m16.963s 00:15:19.523 user 0m13.265s 00:15:19.523 sys 0m1.870s 00:15:19.523 13:10:06 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.523 13:10:06 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:19.523 ************************************ 00:15:19.523 END TEST nvme_fio 00:15:19.523 ************************************ 00:15:19.523 ************************************ 00:15:19.523 END TEST nvme 00:15:19.523 ************************************ 00:15:19.523 00:15:19.523 real 1m31.677s 00:15:19.523 user 3m46.875s 00:15:19.523 sys 0m15.036s 00:15:19.523 13:10:06 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.523 13:10:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.523 13:10:06 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:19.523 13:10:06 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:19.523 13:10:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:19.523 13:10:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.523 13:10:06 -- common/autotest_common.sh@10 -- # set +x 00:15:19.523 ************************************ 00:15:19.523 START TEST nvme_scc 00:15:19.523 ************************************ 00:15:19.523 13:10:06 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:19.781 * Looking for test storage... 
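The nvme_fio passes above drive each controller through the SPDK fio plugin instead of the kernel block layer, and the fio_plugin trace (autotest_common.sh@1341-1356) shows how the harness finds the ASAN runtime the plugin links against and preloads it ahead of the plugin itself: fio dlopen()s the ioengine after startup, so the sanitizer runtime has to be resident first. A minimal standalone sketch of that logic, assuming the same paths as the trace (the real helper lives in common/autotest_common.sh):

#!/usr/bin/env bash
fio_dir=/usr/src/fio
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
sanitizers=('libasan' 'libclang_rt.asan')

asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # The third ldd column is the resolved path, e.g. /usr/lib64/libasan.so.8.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# Preload the sanitizer runtime plus the plugin, then run fio with the
# caller's job file and options. Note the ioengine filename writes the
# PCIe address with dots ('traddr=0000.00.13.0') because ':' is a
# reserved separator in fio filename syntax.
LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"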
00:15:19.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.781 --rc genhtml_branch_coverage=1 00:15:19.781 --rc genhtml_function_coverage=1 00:15:19.781 --rc genhtml_legend=1 00:15:19.781 --rc geninfo_all_blocks=1 00:15:19.781 --rc geninfo_unexecuted_blocks=1 00:15:19.781 00:15:19.781 ' 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.781 --rc genhtml_branch_coverage=1 00:15:19.781 --rc genhtml_function_coverage=1 00:15:19.781 --rc genhtml_legend=1 00:15:19.781 --rc geninfo_all_blocks=1 00:15:19.781 --rc geninfo_unexecuted_blocks=1 00:15:19.781 00:15:19.781 ' 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.781 --rc genhtml_branch_coverage=1 00:15:19.781 --rc genhtml_function_coverage=1 00:15:19.781 --rc genhtml_legend=1 00:15:19.781 --rc geninfo_all_blocks=1 00:15:19.781 --rc geninfo_unexecuted_blocks=1 00:15:19.781 00:15:19.781 ' 00:15:19.781 13:10:06 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:19.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.781 --rc genhtml_branch_coverage=1 00:15:19.781 --rc genhtml_function_coverage=1 00:15:19.781 --rc genhtml_legend=1 00:15:19.781 --rc geninfo_all_blocks=1 00:15:19.781 --rc geninfo_unexecuted_blocks=1 00:15:19.781 00:15:19.781 ' 00:15:19.781 13:10:06 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.781 13:10:06 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.781 13:10:06 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.781 13:10:06 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.781 13:10:06 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.781 13:10:06 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:19.781 13:10:06 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
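The lcov probe above decides which coverage flags to export by comparing the installed lcov version against 2, and the cmp_versions trace (scripts/common.sh@333-368) shows the algorithm: split both version strings on '.', '-' and ':' and compare the components numerically, left to right. A condensed sketch covering only the '<' case, with missing components defaulting to 0 (the real cmp_versions also handles the other operators and validates each component with decimal):

lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater -> not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller -> less-than
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo 'lcov < 2: keep the legacy option spelling'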
00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:19.781 13:10:06 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:19.781 13:10:06 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.781 13:10:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:19.781 13:10:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:19.781 13:10:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:19.781 13:10:06 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:20.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:20.296 Waiting for block devices as requested 00:15:20.296 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:20.552 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:20.553 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:20.553 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.816 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:25.816 13:10:12 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:25.816 13:10:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:25.816 13:10:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:25.816 13:10:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:25.816 13:10:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
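Everything from here to the end of the scan is scan_nvme_ctrls/nvme_get (test/common/nvme/functions.sh) materializing the controller's id-ctrl report, and later each namespace's id-ns report, into bash associative arrays: one IFS=:/read/eval round trip per register, so later tests can consult fields like ${nvme0[oncs]} without re-running nvme-cli. A compact sketch of the parsing idiom, assuming nvme-cli's default 'name : value' text output (the real nvme_get also carries the ref/shift indirection visible in the trace):

nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # global associative array, e.g. nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # key: strip padding ('vid      ' -> vid)
        val=${val# }                     # value: drop the single leading space
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\"\$val\""   # nvme0[vid]=0x1b36, nvme0[mn]='QEMU NVMe Ctrl '
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
}

nvme_get_sketch nvme0 /dev/nvme0
echo "controller reports nn=${nvme0[nn]} namespaces"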
00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.816 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
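Most of the scalar registers being stored here (lpa=0x7 just above; oacs and oncs a little further on) are capability bitmasks from the Identify Controller structure, so consumers test individual bits rather than whole values. For example, once the array is populated, a test can gate on ONCS bit 3, which the NVMe base spec assigns to Write Zeroes support (a hypothetical one-liner, not from the trace):

(( nvme0[oncs] & (1 << 3) )) && echo 'nvme0 supports Write Zeroes'   # 0x15d has bit 3 set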
00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:25.817 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:25.818 13:10:12 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.818 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:25.819 13:10:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:25.819 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:25.820 
13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
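The ng0n1 array being filled here comes from the namespace walk at functions.sh@54 above, which pairs each controller with both of its per-namespace nodes: the nvme0n1 block device and the ng0n1 generic character device that nvme-cli can query with id-ns. A sketch of that enumeration, using the same extglob pattern as the trace:

shopt -s extglob
ctrl=/sys/class/nvme/nvme0
# @(ng0|nvme0n)* matches both .../ng0n1 (char dev) and .../nvme0n1 (block dev)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"
done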
00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:25.820 13:10:12 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.820 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:25.821 13:10:12 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:25.821 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:25.822 13:10:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.822 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:25.823 13:10:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:25.823 13:10:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:25.823 13:10:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:25.823 13:10:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:25.823 13:10:12 
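# The functions.sh@47-63 and scripts/common.sh@18-27 steps traced in this
# stretch amount to a discovery walk over /sys/class/nvme: gate each
# controller through pci_can_use, then record it in the bookkeeping arrays
# before re-running nvme_get for id-ctrl. A hedged reconstruction follows;
# the PCI_ALLOWED/PCI_BLOCKED variables and the readlink lookup are assumed
# shapes, not lines copied from the scripts:
#
# declare -A ctrls nvmes bdfs
# declare -a ordered_ctrls
# for ctrl in /sys/class/nvme/nvme*; do
#     [[ -e $ctrl ]] || continue
#     pci=$(basename "$(readlink -f "$ctrl/device")")              # e.g. 0000:00:10.0
#     [[ -n ${PCI_ALLOWED:-} && ! $PCI_ALLOWED =~ $pci ]] && continue   # allow-list gate
#     [[ -n ${PCI_BLOCKED:-} && $PCI_BLOCKED =~ $pci ]] && continue     # block-list gate
#     ctrl_dev=${ctrl##*/}                                         # nvme0, nvme1, ...
#     ctrls["$ctrl_dev"]=$ctrl_dev                                 # functions.sh@60
#     nvmes["$ctrl_dev"]=${ctrl_dev}_ns                            # @61: name of the ns map
#     bdfs["$ctrl_dev"]=$pci                                       # @62
#     ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev                   # @63: index by number
#     # ...then nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev", as traced below
# done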
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 
13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:25.823 
13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:25.823 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.823 13:10:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:25.824 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:26.088 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:26.089 13:10:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:26.089 13:10:12 
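The id-ctrl dump above is produced by the nvme_get helper in nvme/functions.sh: it pipes nvme-cli's "field : value" output through an IFS=: read loop and evals each pair into a global associative array (that is what the repeated functions.sh@21-23 entries are tracing). A minimal sketch of that pattern, assuming bash 4+ and the nvme-cli path shown in the trace; the helper name nvme_get_sketch and the exact trimming are illustrative, not the upstream implementation:

#!/usr/bin/env bash
# Sketch of the parse loop traced at nvme/functions.sh@16-23:
# split "field : value" lines on ':' and eval each pair into a
# global associative array, e.g. nvme1[oncs]=0x15d.
nvme_get_sketch() {
  local ref=$1 dev=$2 reg val
  declare -gA "$ref=()"
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # drop padding around the field name
    [[ -n $val ]] || continue       # skip lines that carry no value
    eval "${ref}[\$reg]=\${val# }"  # strip one leading space from the value
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
}
# Usage: nvme_get_sketch nvme1 /dev/nvme1; echo "${nvme1[oncs]}"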
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:15:26.089 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:15:26.090 13:10:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:26.090 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 
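Both namespace nodes of nvme1 are visited by the functions.sh@54 loop above: the generic character device ng1n1 first, then the block device nvme1n1, each fed through the same id-ns parse. The glob it iterates is an extglob alternation; a standalone demonstration of that same pattern follows (paths assumed from the trace, extglob must be enabled):

# Reproduces the functions.sh@54 match for controller nvme1:
# "ng" + controller index, or the controller name + "n", each
# followed by the namespace index.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "${ns##*/}"   # prints ng1n1 and nvme1n1 on this test VM
done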
13:10:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:26.091 
13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.091 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:26.092 13:10:12 
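Reading the captured values back: for nvme1n1 the trace recorded flbas=0x7 and marked lbaf7 "(in use)". Per the NVMe spec the low nibble of FLBAS selects the active LBA format, and lbads is log2 of the LBA data size, so this namespace is formatted with 4096-byte blocks and 64 bytes of metadata. A worked decode of the values above (variable names illustrative):

flbas=0x7
fmt=$((flbas & 0xf))                         # -> 7, i.e. lbaf7 is active
lbaf='ms:64 lbads:12 rp:0 (in use)'          # nvme1n1[lbaf7] from the trace
lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # -> 12
echo "LBA data size: $((1 << lbads)) bytes"  # -> 4096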
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:26.092 13:10:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:26.092 13:10:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:26.092 13:10:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:26.092 13:10:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:26.092 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
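With nvme1 fully parsed, the trace registers it in the ctrls/nvmes/bdfs tables (functions.sh@60-63) and moves on to nvme2 at PCI address 0000:00:12.0, gated by pci_can_use. The bookkeeping reduces to roughly the pattern below; pci_can_use is the real check in scripts/common.sh seen in the trace, while the readlink-based PCI lookup is an assumption for the step the log does not show:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0 (assumed lookup)
  pci_can_use "$pci" || continue                   # skip blocked/filtered devices
  ctrl_dev=${ctrl##*/}                             # e.g. nvme2
  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # name of that controller's ns map
  bdfs[$ctrl_dev]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index by controller number
done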
'nvme2[fr]="8.0.0 "' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:26.093 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.094 13:10:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:26.094 13:10:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.094 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
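The id-ctrl pass above is nvme_get() at work: functions.sh@16 runs nvme-cli against the device node, @21 splits each "field : value" report line on the first colon with IFS=: read -r reg val, @22 skips empty values, and @23 evals the pair into a global associative array (here nvme2). Below is a condensed, runnable sketch of that mechanism, assuming nvme-cli is on PATH; the helper name and the whitespace trimming are illustrative, not the verbatim nvme/functions.sh source.

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern traced above (functions.sh@16-23).
    nvme_get_sketch() {
        local ref=$1 reg val        # @17: target array name plus loop vars
        shift                       # @18: the rest is the nvme-cli invocation
        local -gA "$ref=()"         # @20: declare the global associative array
        while IFS=: read -r reg val; do            # @21: split on the first ':'
            reg=${reg%%[[:space:]]*}               # trim padding after the field name
            val=${val#"${val%%[![:space:]]*}"}     # trim leading spaces on the value
            [[ -n $val ]] && eval "$ref[$reg]=\"\$val\""   # @22/@23: keep non-empty fields
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme2 nvme id-ctrl /dev/nvme2

The eval is what turns a report line such as "wctemp : 343" into the nvme2[wctemp]=343 assignment traced above; since read keeps everything after the first colon in val, values that themselves contain colons (the subnqn and ps0 lines, for instance) survive intact, while blank fields never reach the array.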
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.095 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"'
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:15:26.096 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
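With the ng2n1 map filled, the loop from functions.sh@54 moves on to the next namespace node. The extglob pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the character-device entries (ng2nY) and the block-device entries (nvme2nY) under the controller's sysfs directory, and @58 records each hit in the _ctrl_ns nameref (bound to nvme2_ns at @53), keyed by the namespace index. A runnable sketch of just that enumeration step; the function name is illustrative, and nullglob is added so the loop is simply a no-op on hosts without such a controller:

    #!/usr/bin/env bash
    # Sketch of the namespace enumeration seen at functions.sh@53-58.
    shopt -s extglob nullglob

    declare -A nvme2_ns=()   # filled per controller, e.g. nvme2_ns[1]=ng2n1

    enumerate_ns_sketch() {
        local ctrl=$1                        # e.g. /sys/class/nvme/nvme2
        local -n _ctrl_ns=${ctrl##*/}_ns     # @53: nameref to nvme2_ns
        local ns
        # @54: match char (ng2n1) and block (nvme2n1) namespace nodes
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue         # @55
            _ctrl_ns[${ns##*n}]=${ns##*/}    # @58: key by namespace index
        done
    }

    enumerate_ns_sketch /sys/class/nvme/nvme2
    declare -p nvme2_ns

Since ${ns##*n} strips everything through the last "n", /sys/class/nvme/nvme2/ng2n1 keys the array as _ctrl_ns[1]=ng2n1, which is exactly the mapping the trace records before iterating to ng2n2 below.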
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"'
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.097 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"'
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"'
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"'
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"'
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"'
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"'
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.098 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"'
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"'
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"'
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"'
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"'
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"'
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.360 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
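The lbaf0..lbaf7 strings captured for each namespace describe its supported LBA formats: ms is the per-block metadata size in bytes, lbads the log2 of the data size, rp a relative-performance hint, and "(in use)" marks the active format. flbas=0x4 selects format 4 in its low nibble, i.e. lbads:12, a 4096-byte block. A small sketch of that decoding; the helper name is illustrative and the array values are copied from the trace:

    #!/usr/bin/env bash
    # Sketch: decode the active block size from the flbas/lbaf fields above.
    declare -A ng2n2=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    lbaf_to_block_size() {
        local -n _ns=$1
        local fmt=$(( ${_ns[flbas]} & 0xf ))   # low nibble selects the format
        local lbads=${_ns[lbaf$fmt]#*lbads:}   # pull the lbads field
        lbads=${lbads%% *}
        echo $(( 1 << lbads ))                 # lbads is log2 of the data size
    }

    lbaf_to_block_size ng2n2   # prints 4096 for lbads:12

The same decoding applies to ng2n1 above and to ng2n3, whose id-ns pass follows next in the trace.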
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.361 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"'
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- #
IFS=: 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:26.362 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.363 13:10:13 nvme_scc -- 
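Each nvme_get pass recorded above and below follows the same pattern: run nvme-cli's id-ns against the device (functions.sh@16), split every "field : value" output line on the first ':' (the @21 IFS=: / read -r reg val records), and eval the pair into a global associative array named after the device (@23). A minimal sketch of that pattern, assuming an `nvme` binary on PATH and a readable namespace device; the whitespace trimming is an approximation, not the exact nvme/functions.sh source:

  #!/usr/bin/env bash
  # Sketch: parse `nvme id-ns` "field : value" lines into a global
  # associative array named after the device, as the trace does.
  nvme_get() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                    # e.g. declare -gA nvme2n1=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}             # strip padding around the name
      val=${val# }                         # strip the space after ':'
      [[ -n $reg && -n $val ]] || continue # skip blank lines
      eval "${ref}[\$reg]=\$val"           # e.g. nvme2n1[nsze]=0x100000
    done < <(nvme id-ns "$dev")
  }
  nvme_get nvme2n1 /dev/nvme2n1
  declare -n ns=nvme2n1                    # bash 4.3+ nameref
  echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"

Because `read` only splits at the first colon, values that themselves contain colons (the lbafN lines, "ms:0 lbads:12 rp:0 ...") land intact in val, which is why they appear verbatim as array values in the trace.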
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:15:26.363 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
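The @54 loop that the next records repeat for nvme2n2 is an extglob pattern: it matches both the generic character node (ngXnY) and the block node (nvmeXnY) of each namespace under the controller's sysfs directory, and @58 keys _ctrl_ns by the namespace index, so the nvmeXnY entry overwrites the ngXnY entry under the same key. A short reproduction of just that globbing step; the sysfs path is taken from the trace, so real matches require a machine with NVMe devices:

  #!/usr/bin/env bash
  # Sketch: enumerate a controller's namespace nodes the way the trace does.
  shopt -s extglob nullglob
  declare -A _ctrl_ns
  ctrl=/sys/class/nvme/nvme2
  # ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2", so the pattern
  # matches ng2n1, ng2n2, ... and nvme2n1, nvme2n2, ... under $ctrl.
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # ${ns##*n} strips through the last 'n', leaving the namespace index.
    _ctrl_ns[${ns##*n}]=${ns##*/}
  done
  declare -p _ctrl_ns   # e.g. _ctrl_ns=([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")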
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:15:26.364 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:15:26.365 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
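Every namespace parsed so far reports flbas=0x4 with lbaf4 marked "(in use)". Per the NVMe spec, the low nibble of FLBAS selects the active LBA format, and lbads is the log2 of the logical block size, so these QEMU namespaces use 4096-byte blocks. A small worked check, with values copied from the records above:

  #!/usr/bin/env bash
  # Decode flbas/lbads from the parsed id-ns fields.
  flbas=0x4
  lbaf4='ms:0 lbads:12 rp:0 (in use)'
  fmt=$(( flbas & 0xf ))                                 # active format index -> 4
  lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<< "$lbaf4")
  echo "lbaf$fmt in use: $(( 1 << lbads ))-byte logical blocks"   # 4096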
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:15:26.366 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:26.367 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:26.368 13:10:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:26.368 13:10:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:26.368 13:10:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:26.368 13:10:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:26.368 13:10:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:26.368 13:10:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.368 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 
13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:26.369 13:10:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 
13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:26.369 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:26.370 
13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.370 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:26.371 13:10:13 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:15:26.371 13:10:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
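[Editor's note] The xtrace above is functions.sh deciding which controllers support the NVMe Simple Copy Command: earlier in the scan, nvme_get parsed each controller's `nvme id-ctrl` output into a per-controller associative array (IFS=: / read -r reg val / eval), and ctrl_has_scc now reads the cached ONCS value (0x15d here) and tests bit 8, the SCC bit. A minimal standalone sketch of that pattern, under the assumption of a plain nvme-cli install; parse_id_ctrl and ctrl_regs are illustrative names, not SPDK's actual helpers:

#!/usr/bin/env bash
# Sketch: parse 'nvme id-ctrl' into an associative array, then test
# ONCS bit 8 (Simple Copy Command) the way the trace does.
declare -A ctrl_regs

parse_id_ctrl() {
  local dev=$1 reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # strip padding around the field name
    val=${val# }                    # drop the leading space after ':'
    [[ -n $reg ]] && ctrl_regs[$reg]=$val
  done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme1
# Bit 8 of ONCS advertises the Simple Copy Command; 0x15d has it set.
if (( ctrl_regs[oncs] & 1 << 8 )); then
  echo "/dev/nvme1 supports SCC (oncs=${ctrl_regs[oncs]})"
fi

SPDK's functions.sh itself evals values into named arrays (nvme1[oncs], nvme3[oncs], ...) so several controllers can be cached at once; the direct assignment above is a simplified, eval-free version of the same idea.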
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:15:26.371 13:10:13 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:15:26.371 13:10:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:15:26.371 13:10:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:15:26.371 13:10:13 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:26.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:27.502 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:15:27.502 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:15:27.502 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:15:27.760 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:15:27.760 13:10:14 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:15:27.760 13:10:14 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:27.760 13:10:14 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:27.760 13:10:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:15:27.760 ************************************
00:15:27.760 START TEST nvme_simple_copy
00:15:27.760 ************************************
00:15:27.760 13:10:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:15:28.018 Initializing NVMe Controllers
00:15:28.018 Attaching to 0000:00:10.0
00:15:28.018 Controller supports SCC. Attached to 0000:00:10.0
00:15:28.018 Namespace ID: 1 size: 6GB
00:15:28.018 Initialization complete.
00:15:28.018
00:15:28.018 Controller QEMU NVMe Ctrl (12340 )
00:15:28.018 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:15:28.018 Namespace Block Size:4096
00:15:28.018 Writing LBAs 0 to 63 with Random Data
00:15:28.018 Copied LBAs from 0 - 63 to the Destination LBA 256
00:15:28.018 LBAs matching Written Data: 64
00:15:28.018
00:15:28.018 real 0m0.329s
00:15:28.018 user 0m0.147s
00:15:28.018 sys 0m0.080s
00:15:28.018 13:10:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:28.018 13:10:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:15:28.018 ************************************
00:15:28.018 END TEST nvme_simple_copy
00:15:28.018 ************************************
00:15:28.018
00:15:28.018 real 0m8.507s
00:15:28.018 user 0m1.586s
00:15:28.018 sys 0m1.794s
00:15:28.018 13:10:14 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:28.018 13:10:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:15:28.018 ************************************
00:15:28.018 END TEST nvme_scc
00:15:28.018 ************************************
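[Editor's note] The START TEST / END TEST banners and the real/user/sys timings above and below come from run_test in common/autotest_common.sh, which wraps each named test. A rough sketch of that banner-and-timing pattern, as a hedged illustration of the wrapper's visible behavior rather than SPDK's actual implementation (run_test_sketch is a made-up name):

#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: print banners around a named test,
# time it, and propagate its exit status. SPDK's real run_test also
# manages xtrace state, which is omitted here.
run_test_sketch() {
  local name=$1; shift
  (( $# >= 1 )) || return 1   # mirrors the "'[' 4 -le 1 ']'" argument-count guard in the trace
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  local rc=0
  time "$@" || rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

# Usage mirroring the log:
# run_test_sketch nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'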
00:15:28.018 13:10:15 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:15:28.018 13:10:15 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:15:28.018 13:10:15 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:15:28.018 13:10:15 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:15:28.018 13:10:15 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:15:28.018 13:10:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:28.018 13:10:15 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:28.018 13:10:15 -- common/autotest_common.sh@10 -- # set +x
00:15:28.018 ************************************
00:15:28.018 START TEST nvme_fdp
00:15:28.018 ************************************
00:15:28.018 13:10:15 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:15:28.274 * Looking for test storage...
00:15:28.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:28.274 13:10:15 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:28.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:28.274 --rc genhtml_branch_coverage=1
00:15:28.274 --rc genhtml_function_coverage=1
00:15:28.274 --rc genhtml_legend=1
00:15:28.274 --rc geninfo_all_blocks=1
00:15:28.274 --rc geninfo_unexecuted_blocks=1
00:15:28.274
00:15:28.274 '
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:28.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:28.274 --rc genhtml_branch_coverage=1
00:15:28.274 --rc genhtml_function_coverage=1
00:15:28.274 --rc genhtml_legend=1
00:15:28.274 --rc geninfo_all_blocks=1
00:15:28.274 --rc geninfo_unexecuted_blocks=1
00:15:28.274
00:15:28.274 '
00:15:28.274 13:10:15 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:15:28.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:28.275 --rc genhtml_branch_coverage=1
00:15:28.275 --rc genhtml_function_coverage=1
00:15:28.275 --rc genhtml_legend=1
00:15:28.275 --rc geninfo_all_blocks=1
00:15:28.275 --rc geninfo_unexecuted_blocks=1
00:15:28.275
00:15:28.275 '
00:15:28.275 13:10:15 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:15:28.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:28.275 --rc genhtml_branch_coverage=1
00:15:28.275 --rc genhtml_function_coverage=1
00:15:28.275 --rc genhtml_legend=1
00:15:28.275 --rc geninfo_all_blocks=1
00:15:28.275 --rc geninfo_unexecuted_blocks=1
00:15:28.275
00:15:28.275 '
00:15:28.275 13:10:15 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:28.275 13:10:15 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:15:28.275 13:10:15 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:28.275 13:10:15 nvme_fdp -- scripts/common.sh@552 -- # [[ -e
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.275 13:10:15 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.275 13:10:15 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.275 13:10:15 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.275 13:10:15 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.275 13:10:15 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:15:28.275 13:10:15 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:28.275 13:10:15 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:15:28.275 13:10:15 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.275 13:10:15 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:28.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:28.789 Waiting for block devices as requested 00:15:28.789 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.050 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.050 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.050 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:34.330 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:34.330 13:10:21 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:15:34.330 13:10:21 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:34.330 13:10:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:34.330 13:10:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:34.330 13:10:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:34.330 13:10:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:34.330 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:34.330 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:34.331 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.331 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:34.332 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 
13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:34.332 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.332 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:34.333 13:10:21 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:34.333 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.333 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
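The xtrace above is the nvme_get helper from nvme/functions.sh turning the "reg : val" lines of nvme-cli's id-ctrl/id-ns output into global bash associative arrays (nvme0, ng0n1, ...). A condensed sketch of that pattern, reconstructed from this trace rather than copied from nvme/functions.sh (the trimming details and the direct call to /usr/local/src/nvme-cli/nvme differ in the real script), looks roughly like:

    # Sketch reconstructed from the trace: parse "reg : val" lines into a
    # global associative array named by $1 (e.g. nvme0, ng0n1, nvme0n1).
    nvme_get() {
        local ref=$1 reg val
        shift                          # remaining args: id-ctrl /dev/nvme0, etc.
        local -gA "$ref=()"            # declare the array at global scope
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip banner/blank lines
            reg=${reg//[[:space:]]/}           # keys arrive padded, e.g. "vid   "
            val=${val# }                       # drop the space after the colon
            eval "${ref}[\$reg]=\$val"         # e.g. nvme0[vid]=0x1b36
        done < <(nvme "$@")
    }

Once populated, later stages of the suite can query values such as ${nvme0[oacs]} or ${ng0n1[nsze]} without re-running nvme-cli against the device.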
00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:34.334 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
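With ng0n1 fully parsed, the loop at nvme/functions.sh@54 moves to the next namespace node of the same controller: each namespace appears twice under /sys/class/nvme/nvme0, once as the generic character device ng0n1 and once as the block device nvme0n1, which is why an identical id-ns dump repeats below for nvme0n1. The extglob pattern visible in the trace can be read as in this sketch (variable names match the trace; the echo is illustrative only):

    # Requires extglob for the @(...|...) alternation used at functions.sh@54.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    # ${ctrl##*nvme} -> "0" and ${ctrl##*/} -> "nvme0", so the glob expands to
    # /sys/class/nvme/nvme0/@(ng0|nvme0n)* and matches both ng0n1 and nvme0n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue       # no-match leaves the literal pattern
        ns_dev=${ns##*/}               # e.g. ng0n1, then nvme0n1
        echo "nvme_get $ns_dev id-ns /dev/$ns_dev"
    done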
00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:34.334 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:34.335 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.335 13:10:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.335 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:34.336 13:10:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:34.336 13:10:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:34.336 13:10:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:34.336 13:10:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:34.336 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:34.336 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
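
The trace above repeats one parsing idiom for every id-ctrl field: split each "name : value" line from nvme-cli on the colon, guard with [[ -n ... ]], and eval the value into a global associative array named by the first argument. A minimal sketch of that idiom, assuming nvme-cli's "name : value" output format; this is an illustration, not the harness's actual functions.sh:

nvme_get_sketch() {
    local ref=$1 reg val
    local -gA "$ref=()"                      # mirrors the log's: local -gA 'nvme1=()'
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # strip padding around the key
        [[ -n $val ]] || continue            # the [[ -n ... ]] guard seen in the trace
        eval "${ref}[${reg}]=\"\${val# }\""  # e.g. nvme1[vid]="0x1b36"
    done
}

# Hypothetical input standing in for `nvme id-ctrl /dev/nvme1` output:
nvme_get_sketch nvme1 <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
mdts      : 7
EOF
printf 'vid=%s mdts=%s\n' "${nvme1[vid]}" "${nvme1[mdts]}"

Each field stored this way produces the same four trace lines seen throughout this log: the -n test, the eval, the IFS reset, and the next read.
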
00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.337 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
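
Among the values just captured, mdts=7 fixes this QEMU controller's maximum data transfer size. MDTS is expressed as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN); assuming the usual 4 KiB minimum page, the decode is:

mdts=7          # from the trace above
mpsmin=4096     # assumed CAP.MPSMIN page size of 4 KiB
echo "max transfer: $(( (1 << mdts) * mpsmin )) bytes"   # 524288 = 512 KiB
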
00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.338 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:15:34.339 13:10:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.339 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
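
The lbafN strings being stored for ng1n1 each carry the metadata size (ms), the log2 of the LBA data size (lbads), and a relative-performance hint (rp); FLBAS bits 0-3 select which of the formats is in use. A small decode using the values from this trace (flbas=0x7, with lbads:12 on the in-use format):

flbas=0x7                            # from the ng1n1 trace above
lbads=(9 9 9 9 12 12 12 12)          # log2 sector size for lbaf0..lbaf7, per the trace
fmt=$(( flbas & 0xf ))               # FLBAS bits 0-3: format in use
echo "in use: lbaf$fmt, $(( 1 << lbads[fmt] ))-byte sectors"   # 4096
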
00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:15:34.340 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
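
The namespace loop visible at functions.sh@54 relies on a bash extended glob that matches both the character-device node (ng1n1) and the block-device node (nvme1n1) under the controller's sysfs directory, which is why this ng1n1 pass is followed below by a second pass over nvme1n1. A standalone sketch of that pattern, with the sysfs path assumed:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1           # assumed sysfs path, as in the log
# ${ctrl##*nvme} -> "1", ${ctrl##*/} -> "nvme1", so this expands to @(ng1|nvme1n)*
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"  # ng1n1, nvme1n1
done
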
00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:34.340 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.341 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:34.341 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.606 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.606 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:34.607 13:10:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:34.607 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.607 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:34.608 13:10:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:34.608 13:10:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:34.608 13:10:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:34.608 13:10:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:34.608 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
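A quick aside on how the namespace fields already captured combine: flbas bits 0-3 select the in-use LBA format, and each lbafN entry carries lbads, the log2 of the LBA data size. With the nvme1n1 values recorded above (flbas=0x7, and lbaf7 marked "(in use)"), the active block size falls out directly; the declare line just restates the captured values so the snippet runs standalone:

    declare -A nvme1n1=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')
    fmt=$((nvme1n1[flbas] & 0xf))                  # low nibble -> format 7
    lbaf=${nvme1n1[lbaf$fmt]}                      # 'ms:64 lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}     # -> 12
    echo "nvme1n1 block size: $((1 << lbads))"     # -> 4096, plus 64B metadata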
00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:34.609 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
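The wctemp and cctemp values just captured are reported in kelvins per the NVMe spec, so this QEMU controller advertises the usual 70C warning / 100C critical composite-temperature thresholds:

    declare -A nvme2=([wctemp]=343 [cctemp]=373)          # values captured above
    echo "warning threshold:  $((nvme2[wctemp] - 273))C"  # 343K -> 70C
    echo "critical threshold: $((nvme2[cctemp] - 273))C"  # 373K -> 100C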
00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:34.609 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:34.610 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.610 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
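The oncs mask picked up earlier (nvme2[oncs]=0x15d) encodes the controller's optional NVM command support, one of the fields a run like nvme_fdp can key off. A small decode, with bit positions taken from the NVMe base spec rather than anything in this tree, so treat the numbering as an assumption:

    declare -A nvme2=([oncs]=0x15d)        # value captured above; bits 0,2,3,4,6,8 set
    names=([0]=Compare [2]='Dataset Mgmt' [3]='Write Zeroes' [6]=Timestamp [8]=Copy)
    for bit in "${!names[@]}"; do
        (( nvme2[oncs] & 1 << bit )) && echo "ONCS bit $bit: ${names[bit]}"
    done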
00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:15:34.611 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 
13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.612 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.612 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # 
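The enumeration driving these parses is the extglob loop at functions.sh@54: with ctrl=/sys/class/nvme/nvme2, the pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to @(ng2|nvme2n)*, so a single pass picks up both the generic char nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...), while local -n _ctrl_ns=nvme2_ns namerefs the per-controller map and ${ns##*n} keys each entry by its namespace number. A standalone sketch under those assumptions (extglob enabled, bash 4.3+ for namerefs):

shopt -s extglob nullglob
declare -A nvme2_ns=()
ctrl=/sys/class/nvme/nvme2
declare -n _ctrl_ns=nvme2_ns             # stand-in for the helper's local -n
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue             # mirrors the check at functions.sh@55
    _ctrl_ns[${ns##*n}]=${ns##*/}        # key = ns number, value = device name
done
declare -p nvme2_ns

Because glob results sort lexically, the ng2nX nodes are visited before nvme2nX, so for each numeric key the block-node name later overwrites the char-node name in the map.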
ng2n2[nsze]=0x100000 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.613 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:15:34.614 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 
13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.614 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # 
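Everything stored this way is a plain string, so consumers decode fields with parameter expansion and shell arithmetic. As a worked example using values copied verbatim from this trace (flbas=0x4 selects LBA format 4, whose lbads:12 means 2^12-byte blocks with no metadata; the decoding below is the standard NVMe bit layout, not a functions.sh helper):

declare -A ng2n1=( [flbas]=0x4 [nsze]=0x100000
                   [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )

fmt=$(( ${ng2n1[flbas]} & 0xf ))             # low nibble -> format index 4
lbaf=${ng2n1[lbaf$fmt]}
lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # extract "12" from the lbaf string
echo "block size: $(( 1 << lbads )) bytes"   # 4096
echo "ns size:    $(( ng2n1[nsze] * (1 << lbads) )) bytes"   # 4 GiB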
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:15:34.615 
13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:15:34.615 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:15:34.616 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.616 13:10:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:34.616 13:10:21 nvme_fdp -- 
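From here the loop repeats the identical id-ns walk for the nvme2nX block nodes, so each namespace ends up described twice, once per device flavor. A hypothetical consistency check (same_ns is not part of functions.sh) could confirm the two views agree field by field:

same_ns() {
    local -n a=$1                        # nameref to the first assoc array
    local -n b=$2                        # nameref to the second
    local k
    for k in "${!a[@]}"; do
        [[ ${a[$k]} == "${b[$k]-}" ]] || { echo "mismatch at $k"; return 1; }
    done
}
same_ns ng2n1 nvme2n1 && echo "char and block views of ns1 agree"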
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:34.616 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.617 
13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:34.617 13:10:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.617 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.618 
13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
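The trace above walks the eight LBA format descriptors of nvme2n1, and flbas=0x4 marks lbaf4 (ms:0 lbads:12) as the one in use. A minimal sketch of recovering the active block size from the array this trace populates (the helper name is ours, not part of functions.sh; lbads is a power of two, so lbads:12 means 4096-byte blocks):

    lbaf_in_use() {
        local -n ns=$1                     # nameref, e.g. lbaf_in_use nvme2n1
        local idx=$(( ns[flbas] & 0xf ))   # FLBAS bits 0-3 select the format
        local desc=${ns[lbaf$idx]}         # 'ms:0 lbads:12 rp:0 (in use)'
        local lbads=${desc#*lbads:}; lbads=${lbads%% *}
        echo "lbaf$idx: $(( 1 << lbads )) bytes per block"
    }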
00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:34.618 13:10:21 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.618 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:34.619 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.619 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:34.620 13:10:21 
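Here the same nvme_get helper starts over for nvme2n3. A paraphrased sketch of the loop being traced (functions.sh@16-23, simplified): each "field : value" line emitted by nvme-cli is split on ':' and eval'd into a global associative array named after the device, which is why every assignment above appears as an eval immediately followed by the resulting nvme2nX[reg]=val echo:

    nvme_get() {
        local ref=$1 reg val; shift
        local -gA "$ref=()"                      # e.g. nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # 'nsze ' -> 'nsze'
            [[ -n $val ]] && eval "${ref}[$reg]=\"${val# }\""
        done < <(nvme "$@")  # simplified: the script invokes its own nvme-cli build
    }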
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:34.620 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:34.620 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:34.621 13:10:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:34.621 13:10:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:34.621 13:10:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:34.621 13:10:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:34.621 13:10:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- 
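At this point nvme2 and its three namespaces are fully registered (ctrls/nvmes/bdfs/ordered_ctrls) and the outer loop moves on to nvme3 at PCI 0000:00:13.0. A small usage sketch, assuming the maps populated by this scan (the loop body is ours): each controller ends up cross-indexed by device name, PCI address, and the name of its namespace map:

    for dev in "${!ctrls[@]}"; do
        printf '%s -> pci %s, ns map %s\n' "${ctrls[$dev]}" "${bdfs[$dev]}" "${nvmes[$dev]}"
    done
    # prints e.g.: nvme2 -> pci 0000:00:12.0, ns map nvme2_ns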
nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:34.621 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- 
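Worth noting for this job: the ctratt=0x88010 just parsed has bit 19 set, which per the NVMe spec advertises Flexible Data Placement, the feature nvme_fdp exercises. A one-line check against the array above (the helper name is ours):

    supports_fdp() { (( $1 & 1 << 19 )); }   # CTRATT bit 19 = FDP supported
    supports_fdp "${nvme3[ctratt]}" && echo "nvme3 supports FDP"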
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 
13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.622 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.882 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
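The nvme3[...] assignments traced above come from functions.sh splitting each line of the controller's identify output on ':' and eval'ing the pair into an array keyed by register name. A minimal standalone sketch of that pattern, assuming nvme-cli and a /dev/nvme3 device (the in-tree helper evals through the controller name rather than a fixed array):

    # Parse "register : value" lines into an associative array,
    # using the same IFS=: / read -r reg val split seen in the trace.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # register name, e.g. "oncs"
        val=${val# }                        # value text after the colon
        [[ -n $val ]] && nvme3[$reg]=$val   # skip empty registers
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${nvme3[oncs]}"              # 0x15d for the controller dumped above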
00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:34.883 13:10:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:15:34.883 13:10:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:15:34.883 13:10:21 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:15:34.883 13:10:21 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:35.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:35.705 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:35.963 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:35.963 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:35.963 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:35.963 13:10:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:35.963 13:10:22 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:35.963 13:10:22 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.963 13:10:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:35.963 ************************************ 00:15:35.963 START TEST nvme_flexible_data_placement 00:15:35.963 ************************************ 00:15:35.963 13:10:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:36.220 Initializing NVMe Controllers 00:15:36.220 Attaching to 0000:00:13.0 00:15:36.220 Controller supports FDP Attached to 0000:00:13.0 00:15:36.220 Namespace ID: 1 Endurance Group ID: 1 00:15:36.220 Initialization complete. 
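In the selection trace above, get_ctrls_with_feature settles on nvme3 because its ctratt value (0x88010) is the only one with bit 19, the FDP attribute, set; the 0x8000 reported by the other controllers leaves that bit clear. A compact sketch of the same test, hedged to query nvme-cli directly instead of the script's cached register table:

    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        # CTRATT comes from Identify Controller; bit 19 advertises FDP.
        ctratt=$(nvme id-ctrl "/dev/$ctrl" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp nvme3 && echo nvme3   # 0x88010 & 0x80000 != 0, so nvme3 is printed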
00:15:36.220 00:15:36.220 ================================== 00:15:36.220 == FDP tests for Namespace: #01 == 00:15:36.220 ================================== 00:15:36.220 00:15:36.220 Get Feature: FDP: 00:15:36.220 ================= 00:15:36.220 Enabled: Yes 00:15:36.220 FDP configuration Index: 0 00:15:36.220 00:15:36.220 FDP configurations log page 00:15:36.220 =========================== 00:15:36.220 Number of FDP configurations: 1 00:15:36.220 Version: 0 00:15:36.220 Size: 112 00:15:36.220 FDP Configuration Descriptor: 0 00:15:36.220 Descriptor Size: 96 00:15:36.220 Reclaim Group Identifier format: 2 00:15:36.220 FDP Volatile Write Cache: Not Present 00:15:36.220 FDP Configuration: Valid 00:15:36.220 Vendor Specific Size: 0 00:15:36.220 Number of Reclaim Groups: 2 00:15:36.220 Number of Reclaim Unit Handles: 8 00:15:36.220 Max Placement Identifiers: 128 00:15:36.220 Number of Namespaces Supported: 256 00:15:36.220 Reclaim Unit Nominal Size: 6000000 bytes 00:15:36.220 Estimated Reclaim Unit Time Limit: Not Reported 00:15:36.220 RUH Desc #000: RUH Type: Initially Isolated 00:15:36.220 RUH Desc #001: RUH Type: Initially Isolated 00:15:36.221 RUH Desc #002: RUH Type: Initially Isolated 00:15:36.221 RUH Desc #003: RUH Type: Initially Isolated 00:15:36.221 RUH Desc #004: RUH Type: Initially Isolated 00:15:36.221 RUH Desc #005: RUH Type: Initially Isolated 00:15:36.221 RUH Desc #006: RUH Type: Initially Isolated 00:15:36.221 RUH Desc #007: RUH Type: Initially Isolated 00:15:36.221 00:15:36.221 FDP reclaim unit handle usage log page 00:15:36.221 ====================================== 00:15:36.221 Number of Reclaim Unit Handles: 8 00:15:36.221 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:36.221 RUH Usage Desc #001: RUH Attributes: Unused 00:15:36.221 RUH Usage Desc #002: RUH Attributes: Unused 00:15:36.221 RUH Usage Desc #003: RUH Attributes: Unused 00:15:36.221 RUH Usage Desc #004: RUH Attributes: Unused 00:15:36.221 RUH Usage Desc #005: RUH Attributes: Unused 00:15:36.221 RUH Usage Desc #006: RUH Attributes: Unused 00:15:36.221 RUH Usage Desc #007: RUH Attributes: Unused 00:15:36.221 00:15:36.221 FDP statistics log page 00:15:36.221 ======================= 00:15:36.221 Host bytes with metadata written: 857276416 00:15:36.221 Media bytes with metadata written: 857522176 00:15:36.221 Media bytes erased: 0 00:15:36.221 00:15:36.221 FDP Reclaim unit handle status 00:15:36.221 ============================== 00:15:36.221 Number of RUHS descriptors: 2 00:15:36.221 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002e70 00:15:36.221 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:15:36.221 00:15:36.221 FDP write on placement id: 0 success 00:15:36.221 00:15:36.221 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:15:36.221 00:15:36.221 IO mgmt send: RUH update for Placement ID: #0 Success 00:15:36.221 00:15:36.221 Get Feature: FDP Events for Placement handle: #0 00:15:36.221 ======================== 00:15:36.221 Number of FDP Events: 6 00:15:36.221 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:15:36.221 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:15:36.221 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:15:36.221 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:15:36.221 FDP Event: #4 Type: Media Reallocated Enabled: No 00:15:36.221 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:15:36.221 00:15:36.221 FDP events log page
00:15:36.221 =================== 00:15:36.221 Number of FDP events: 1 00:15:36.221 FDP Event #0: 00:15:36.221 Event Type: RU Not Written to Capacity 00:15:36.221 Placement Identifier: Valid 00:15:36.221 NSID: Valid 00:15:36.221 Location: Valid 00:15:36.221 Placement Identifier: 0 00:15:36.221 Event Timestamp: 8 00:15:36.221 Namespace Identifier: 1 00:15:36.221 Reclaim Group Identifier: 0 00:15:36.221 Reclaim Unit Handle Identifier: 0 00:15:36.221 00:15:36.221 FDP test passed 00:15:36.221 00:15:36.221 real 0m0.308s 00:15:36.221 user 0m0.124s 00:15:36.221 sys 0m0.082s 00:15:36.221 13:10:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.221 ************************************ 00:15:36.221 END TEST nvme_flexible_data_placement 00:15:36.221 13:10:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:15:36.221 ************************************ 00:15:36.479 00:15:36.479 real 0m8.218s 00:15:36.479 user 0m1.516s 00:15:36.479 sys 0m1.720s 00:15:36.479 13:10:23 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.479 ************************************ 00:15:36.479 END TEST nvme_fdp 00:15:36.479 13:10:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:36.479 ************************************ 00:15:36.479 13:10:23 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:15:36.479 13:10:23 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:36.479 13:10:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:36.479 13:10:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.479 13:10:23 -- common/autotest_common.sh@10 -- # set +x 00:15:36.479 ************************************ 00:15:36.479 START TEST nvme_rpc 00:15:36.479 ************************************ 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:36.479 * Looking for test storage... 
00:15:36.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.479 13:10:23 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.479 --rc genhtml_branch_coverage=1 00:15:36.479 --rc genhtml_function_coverage=1 00:15:36.479 --rc genhtml_legend=1 00:15:36.479 --rc geninfo_all_blocks=1 00:15:36.479 --rc geninfo_unexecuted_blocks=1 00:15:36.479 00:15:36.479 ' 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.479 --rc genhtml_branch_coverage=1 00:15:36.479 --rc genhtml_function_coverage=1 00:15:36.479 --rc genhtml_legend=1 00:15:36.479 --rc geninfo_all_blocks=1 00:15:36.479 --rc geninfo_unexecuted_blocks=1 00:15:36.479 00:15:36.479 ' 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.479 --rc genhtml_branch_coverage=1 00:15:36.479 --rc genhtml_function_coverage=1 00:15:36.479 --rc genhtml_legend=1 00:15:36.479 --rc geninfo_all_blocks=1 00:15:36.479 --rc geninfo_unexecuted_blocks=1 00:15:36.479 00:15:36.479 ' 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:36.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.479 --rc genhtml_branch_coverage=1 00:15:36.479 --rc genhtml_function_coverage=1 00:15:36.479 --rc genhtml_legend=1 00:15:36.479 --rc geninfo_all_blocks=1 00:15:36.479 --rc geninfo_unexecuted_blocks=1 00:15:36.479 00:15:36.479 ' 00:15:36.479 13:10:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.479 13:10:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:36.479 13:10:23 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:36.738 13:10:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:15:36.738 13:10:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67298 00:15:36.738 13:10:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:36.738 13:10:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:36.738 13:10:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67298 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67298 ']' 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.738 13:10:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.738 [2024-12-06 13:10:23.684005] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:15:36.738 [2024-12-06 13:10:23.684218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67298 ] 00:15:36.996 [2024-12-06 13:10:23.878475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:37.255 [2024-12-06 13:10:24.035628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.255 [2024-12-06 13:10:24.035636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.193 13:10:24 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.193 13:10:24 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:38.193 13:10:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:15:38.450 Nvme0n1 00:15:38.450 13:10:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:38.450 13:10:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:38.708 request: 00:15:38.708 { 00:15:38.708 "bdev_name": "Nvme0n1", 00:15:38.708 "filename": "non_existing_file", 00:15:38.708 "method": "bdev_nvme_apply_firmware", 00:15:38.708 "req_id": 1 00:15:38.708 } 00:15:38.708 Got JSON-RPC error response 00:15:38.708 response: 00:15:38.708 { 00:15:38.708 "code": -32603, 00:15:38.708 "message": "open file failed." 00:15:38.708 } 00:15:38.708 13:10:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:38.708 13:10:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:38.708 13:10:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:39.274 13:10:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:39.274 13:10:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67298 00:15:39.274 13:10:25 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67298 ']' 00:15:39.274 13:10:25 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67298 00:15:39.274 13:10:25 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:39.274 13:10:25 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.274 13:10:25 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67298 00:15:39.274 13:10:26 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.274 killing process with pid 67298 00:15:39.274 13:10:26 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.274 13:10:26 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67298' 00:15:39.274 13:10:26 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67298 00:15:39.274 13:10:26 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67298 00:15:41.174 00:15:41.175 real 0m4.820s 00:15:41.175 user 0m9.419s 00:15:41.175 sys 0m0.739s 00:15:41.175 13:10:28 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.175 13:10:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.175 ************************************ 00:15:41.175 END TEST nvme_rpc 00:15:41.175 ************************************ 00:15:41.175 13:10:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:41.175 13:10:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:15:41.175 13:10:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.175 13:10:28 -- common/autotest_common.sh@10 -- # set +x 00:15:41.175 ************************************ 00:15:41.175 START TEST nvme_rpc_timeouts 00:15:41.175 ************************************ 00:15:41.175 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:41.434 * Looking for test storage... 00:15:41.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.434 13:10:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:41.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.434 --rc genhtml_branch_coverage=1 00:15:41.434 --rc genhtml_function_coverage=1 00:15:41.434 --rc genhtml_legend=1 00:15:41.434 --rc geninfo_all_blocks=1 00:15:41.434 --rc geninfo_unexecuted_blocks=1 00:15:41.434 00:15:41.434 ' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:41.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.434 --rc genhtml_branch_coverage=1 00:15:41.434 --rc genhtml_function_coverage=1 00:15:41.434 --rc genhtml_legend=1 00:15:41.434 --rc geninfo_all_blocks=1 00:15:41.434 --rc geninfo_unexecuted_blocks=1 00:15:41.434 00:15:41.434 ' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:41.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.434 --rc genhtml_branch_coverage=1 00:15:41.434 --rc genhtml_function_coverage=1 00:15:41.434 --rc genhtml_legend=1 00:15:41.434 --rc geninfo_all_blocks=1 00:15:41.434 --rc geninfo_unexecuted_blocks=1 00:15:41.434 00:15:41.434 ' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:41.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.434 --rc genhtml_branch_coverage=1 00:15:41.434 --rc genhtml_function_coverage=1 00:15:41.434 --rc genhtml_legend=1 00:15:41.434 --rc geninfo_all_blocks=1 00:15:41.434 --rc geninfo_unexecuted_blocks=1 00:15:41.434 00:15:41.434 ' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.434 13:10:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67374 00:15:41.434 13:10:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67374 00:15:41.434 13:10:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67410 00:15:41.434 13:10:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:41.434 13:10:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:15:41.434 13:10:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67410 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67410 ']' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.434 13:10:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:41.692 [2024-12-06 13:10:28.515444] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:15:41.692 [2024-12-06 13:10:28.515607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67410 ] 00:15:41.692 [2024-12-06 13:10:28.690367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:41.951 [2024-12-06 13:10:28.821862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.951 [2024-12-06 13:10:28.821877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.915 13:10:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.915 13:10:29 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:15:42.915 Checking default timeout settings: 00:15:42.915 13:10:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:42.915 13:10:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:43.173 Making settings changes with rpc: 00:15:43.173 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:43.173 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:43.431 Check default vs. modified settings: 00:15:43.431 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:15:43.431 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67374 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67374 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:43.996 Setting action_on_timeout is changed as expected. 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67374 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67374 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:43.996 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:15:43.997 Setting timeout_us is changed as expected. 
00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67374 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67374 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:43.997 Setting timeout_admin_us is changed as expected. 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67374 /tmp/settings_modified_67374 00:15:43.997 13:10:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67410 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67410 ']' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67410 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67410 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.997 killing process with pid 67410 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67410' 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67410 00:15:43.997 13:10:30 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67410 00:15:46.586 RPC TIMEOUT SETTING TEST PASSED. 00:15:46.586 13:10:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
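The default-vs-modified comparison above reduces to: save the JSON config once before and once after bdev_nvme_set_options, then diff one field at a time. A sketch of that loop over the two snapshot files named in the trace, mirroring its grep | awk | sed pipeline:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # pull the field out of each saved config and strip punctuation
        before=$(grep "$setting" /tmp/settings_default_67374 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67374 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ $before == "$after" ]]; then
            echo "Setting $setting was not changed!"
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done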
00:15:46.586 00:15:46.586 real 0m4.874s 00:15:46.586 user 0m9.366s 00:15:46.586 sys 0m0.757s 00:15:46.586 13:10:33 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.586 13:10:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:46.586 ************************************ 00:15:46.586 END TEST nvme_rpc_timeouts 00:15:46.586 ************************************ 00:15:46.586 13:10:33 -- spdk/autotest.sh@239 -- # uname -s 00:15:46.586 13:10:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:15:46.586 13:10:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:46.586 13:10:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:46.586 13:10:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.586 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.586 ************************************ 00:15:46.586 START TEST sw_hotplug 00:15:46.586 ************************************ 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:46.586 * Looking for test storage... 00:15:46.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.586 13:10:33 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:46.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.586 --rc genhtml_branch_coverage=1 00:15:46.586 --rc genhtml_function_coverage=1 00:15:46.586 --rc genhtml_legend=1 00:15:46.586 --rc geninfo_all_blocks=1 00:15:46.586 --rc geninfo_unexecuted_blocks=1 00:15:46.586 00:15:46.586 ' 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:46.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.586 --rc genhtml_branch_coverage=1 00:15:46.586 --rc genhtml_function_coverage=1 00:15:46.586 --rc genhtml_legend=1 00:15:46.586 --rc geninfo_all_blocks=1 00:15:46.586 --rc geninfo_unexecuted_blocks=1 00:15:46.586 00:15:46.586 ' 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:46.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.586 --rc genhtml_branch_coverage=1 00:15:46.586 --rc genhtml_function_coverage=1 00:15:46.586 --rc genhtml_legend=1 00:15:46.586 --rc geninfo_all_blocks=1 00:15:46.586 --rc geninfo_unexecuted_blocks=1 00:15:46.586 00:15:46.586 ' 00:15:46.586 13:10:33 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:46.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.586 --rc genhtml_branch_coverage=1 00:15:46.586 --rc genhtml_function_coverage=1 00:15:46.586 --rc genhtml_legend=1 00:15:46.586 --rc geninfo_all_blocks=1 00:15:46.586 --rc geninfo_unexecuted_blocks=1 00:15:46.586 00:15:46.586 ' 00:15:46.586 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:46.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.901 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.901 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.901 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.901 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.901 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:15:46.901 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:15:46.901 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
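The lt 1.15 2 walk through scripts/common.sh just above is a field-by-field dotted-version compare, here deciding that the installed lcov predates 2.x and still needs the legacy --rc option spellings. A reduced sketch of the same idea (numeric fields only; the real cmp_versions also normalizes non-numeric parts):

    # returns 0 when $1 sorts strictly before $2, splitting fields on . - :
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1    # missing fields count as 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* options'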
00:15:46.901 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@233 -- # local class 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:46.902 13:10:33 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:15:46.902 13:10:33 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:46.902 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:15:46.902 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:46.902 13:10:33 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:47.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:47.468 Waiting for block devices as requested 00:15:47.468 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:47.726 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:47.726 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:47.726 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:52.988 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:52.988 13:10:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:15:52.988 13:10:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:53.247 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:15:53.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:53.505 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:15:53.763 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:15:54.021 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:54.021 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:15:54.021 13:10:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68286 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:54.021 13:10:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:15:54.021 13:10:41 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:54.022 13:10:41 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:54.022 13:10:41 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:54.022 13:10:41 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:54.022 13:10:41 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:15:54.022 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:54.022 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:54.022 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:15:54.022 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:54.022 13:10:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:54.280 Initializing NVMe Controllers 00:15:54.280 Attaching to 0000:00:10.0 00:15:54.280 Attaching to 0000:00:11.0 00:15:54.280 Attached to 0000:00:10.0 00:15:54.280 Attached to 0000:00:11.0 00:15:54.280 Initialization complete. Starting I/O... 
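Before the hotplug app above attached, nvme_in_userspace (traced right after the START TEST banner) picked the candidate controllers. It reduces to a single lspci pipeline: class 01 (mass storage) plus subclass 08 (non-volatile memory) form class code 0108, progif 02 is NVM Express, and the -p02 suffix marks those entries in lspci -mm output. As traced, with the PCI_ALLOWED step paraphrased in comments:

    # enumerate NVMe controllers by PCI class code (scripts/common.sh)
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # setup.sh then rebinds only BDFs listed in PCI_ALLOWED and prints
    # "Skipping denied controller" for the rest, which is how this run
    # narrowed four controllers down to 0000:00:10.0 and 0000:00:11.0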
00:15:54.280 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:54.280 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:15:54.280 00:15:55.655 QEMU NVMe Ctrl (12340 ): 1179 I/Os completed (+1179) 00:15:55.656 QEMU NVMe Ctrl (12341 ): 1292 I/Os completed (+1292) 00:15:55.656 00:15:56.590 QEMU NVMe Ctrl (12340 ): 2394 I/Os completed (+1215) 00:15:56.590 QEMU NVMe Ctrl (12341 ): 2656 I/Os completed (+1364) 00:15:56.591 00:15:57.527 QEMU NVMe Ctrl (12340 ): 4097 I/Os completed (+1703) 00:15:57.527 QEMU NVMe Ctrl (12341 ): 4427 I/Os completed (+1771) 00:15:57.527 00:15:58.461 QEMU NVMe Ctrl (12340 ): 5533 I/Os completed (+1436) 00:15:58.461 QEMU NVMe Ctrl (12341 ): 5978 I/Os completed (+1551) 00:15:58.461 00:15:59.393 QEMU NVMe Ctrl (12340 ): 7480 I/Os completed (+1947) 00:15:59.393 QEMU NVMe Ctrl (12341 ): 8213 I/Os completed (+2235) 00:15:59.393 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:00.327 [2024-12-06 13:10:47.011476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:00.327 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:00.327 [2024-12-06 13:10:47.013521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.013594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.013625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.013653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:00.327 [2024-12-06 13:10:47.016591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.016654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.016679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.016701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:16:00.327 EAL: Scan for (pci) bus failed. 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:00.327 [2024-12-06 13:10:47.039032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:00.327 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:00.327 [2024-12-06 13:10:47.040862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.040924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.040959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.040987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:00.327 [2024-12-06 13:10:47.043784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.043840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.043869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 [2024-12-06 13:10:47.043892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:00.327 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:00.327 EAL: Scan for (pci) bus failed. 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:00.327 Attaching to 0000:00:10.0 00:16:00.327 Attached to 0000:00:10.0 00:16:00.327 QEMU NVMe Ctrl (12340 ): 20 I/Os completed (+20) 00:16:00.327 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:00.327 13:10:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:00.327 Attaching to 0000:00:11.0 00:16:00.327 Attached to 0000:00:11.0 00:16:01.262 QEMU NVMe Ctrl (12340 ): 1656 I/Os completed (+1636) 00:16:01.262 QEMU NVMe Ctrl (12341 ): 1541 I/Os completed (+1541) 00:16:01.262 00:16:02.637 QEMU NVMe Ctrl (12340 ): 3365 I/Os completed (+1709) 00:16:02.637 QEMU NVMe Ctrl (12341 ): 3295 I/Os completed (+1754) 00:16:02.637 00:16:03.573 QEMU NVMe Ctrl (12340 ): 5077 I/Os completed (+1712) 00:16:03.573 QEMU NVMe Ctrl (12341 ): 5057 I/Os completed (+1762) 00:16:03.573 00:16:04.506 QEMU NVMe Ctrl (12340 ): 6742 I/Os completed (+1665) 00:16:04.506 QEMU NVMe Ctrl (12341 ): 6764 I/Os completed (+1707) 00:16:04.506 00:16:05.514 QEMU NVMe Ctrl (12340 ): 8262 I/Os completed (+1520) 00:16:05.514 QEMU NVMe Ctrl (12341 ): 8369 I/Os completed (+1605) 00:16:05.514 00:16:06.446 QEMU NVMe Ctrl (12340 ): 9899 I/Os completed (+1637) 00:16:06.446 QEMU NVMe Ctrl (12341 ): 10074 I/Os completed (+1705) 00:16:06.446 00:16:07.376 QEMU NVMe Ctrl (12340 ): 11565 I/Os completed (+1666) 00:16:07.376 
QEMU NVMe Ctrl (12341 ): 11858 I/Os completed (+1784) 00:16:07.376 00:16:08.308 QEMU NVMe Ctrl (12340 ): 12970 I/Os completed (+1405) 00:16:08.308 QEMU NVMe Ctrl (12341 ): 13332 I/Os completed (+1474) 00:16:08.308 00:16:09.689 QEMU NVMe Ctrl (12340 ): 14554 I/Os completed (+1584) 00:16:09.689 QEMU NVMe Ctrl (12341 ): 14997 I/Os completed (+1665) 00:16:09.689 00:16:10.256 QEMU NVMe Ctrl (12340 ): 16277 I/Os completed (+1723) 00:16:10.256 QEMU NVMe Ctrl (12341 ): 16728 I/Os completed (+1731) 00:16:10.256 00:16:11.632 QEMU NVMe Ctrl (12340 ): 17929 I/Os completed (+1652) 00:16:11.632 QEMU NVMe Ctrl (12341 ): 18426 I/Os completed (+1698) 00:16:11.632 00:16:12.568 QEMU NVMe Ctrl (12340 ): 19487 I/Os completed (+1558) 00:16:12.568 QEMU NVMe Ctrl (12341 ): 20061 I/Os completed (+1635) 00:16:12.568 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:12.568 [2024-12-06 13:10:59.327735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:12.568 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:12.568 [2024-12-06 13:10:59.329669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.329739] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.329773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.329801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:12.568 [2024-12-06 13:10:59.332669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.332728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.332754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.332776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:12.568 [2024-12-06 13:10:59.360842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
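In each cycle above, the echo 1 traced at sw_hotplug.sh@40 is a sysfs surprise-remove, and the echo uio_pci_generic / echo <bdf> pair at @58-62 is the rebind that brings the controller back. Roughly, against the standard Linux PCI sysfs ABI (paths inferred, not quoted from the script):

    bdf=0000:00:10.0
    # surprise-remove: the device drops off the bus; SPDK then logs the
    # "nvme_ctrlr_fail ... in failed state" lines seen above
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    # bring it back: rescan the bus, steer the driver, trigger a probe
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe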
00:16:12.568 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:12.568 [2024-12-06 13:10:59.362823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.362885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.362921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.362946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:12.568 [2024-12-06 13:10:59.365564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.365614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.365640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 [2024-12-06 13:10:59.365666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:12.568 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:12.569 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:12.569 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:12.569 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:12.569 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:12.569 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:12.569 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:12.828 Attaching to 0000:00:10.0 00:16:12.828 Attached to 0000:00:10.0 00:16:12.828 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:12.828 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:12.828 13:10:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:12.828 Attaching to 0000:00:11.0 00:16:12.828 Attached to 0000:00:11.0 00:16:13.471 QEMU NVMe Ctrl (12340 ): 1068 I/Os completed (+1068) 00:16:13.471 QEMU NVMe Ctrl (12341 ): 961 I/Os completed (+961) 00:16:13.471 00:16:14.407 QEMU NVMe Ctrl (12340 ): 2786 I/Os completed (+1718) 00:16:14.407 QEMU NVMe Ctrl (12341 ): 2720 I/Os completed (+1759) 00:16:14.407 00:16:15.345 QEMU NVMe Ctrl (12340 ): 4478 I/Os completed (+1692) 00:16:15.345 QEMU NVMe Ctrl (12341 ): 4466 I/Os completed (+1746) 00:16:15.345 00:16:16.281 QEMU NVMe Ctrl (12340 ): 6258 I/Os completed (+1780) 00:16:16.281 QEMU NVMe Ctrl (12341 ): 6258 I/Os completed (+1792) 00:16:16.281 00:16:17.661 QEMU NVMe Ctrl (12340 ): 7864 I/Os completed (+1606) 00:16:17.661 QEMU NVMe Ctrl (12341 ): 7904 I/Os completed (+1646) 00:16:17.661 00:16:18.597 QEMU NVMe Ctrl (12340 ): 9695 I/Os completed (+1831) 00:16:18.597 QEMU NVMe Ctrl (12341 ): 9762 I/Os completed (+1858) 00:16:18.597 00:16:19.533 QEMU NVMe Ctrl (12340 ): 11407 I/Os completed (+1712) 00:16:19.533 QEMU NVMe Ctrl (12341 ): 11520 I/Os completed (+1758) 00:16:19.533 00:16:20.468 QEMU NVMe Ctrl (12340 ): 13144 I/Os completed (+1737) 00:16:20.468 QEMU NVMe Ctrl (12341 ): 13289 I/Os completed (+1769) 00:16:20.468 00:16:21.401 QEMU 
NVMe Ctrl (12340 ): 14984 I/Os completed (+1840) 00:16:21.401 QEMU NVMe Ctrl (12341 ): 15134 I/Os completed (+1845) 00:16:21.401 00:16:22.397 QEMU NVMe Ctrl (12340 ): 16537 I/Os completed (+1553) 00:16:22.397 QEMU NVMe Ctrl (12341 ): 16731 I/Os completed (+1597) 00:16:22.397 00:16:23.332 QEMU NVMe Ctrl (12340 ): 18225 I/Os completed (+1688) 00:16:23.332 QEMU NVMe Ctrl (12341 ): 18465 I/Os completed (+1734) 00:16:23.332 00:16:24.266 QEMU NVMe Ctrl (12340 ): 20017 I/Os completed (+1792) 00:16:24.266 QEMU NVMe Ctrl (12341 ): 20262 I/Os completed (+1797) 00:16:24.266 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:24.882 [2024-12-06 13:11:11.672939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:24.882 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:24.882 [2024-12-06 13:11:11.674919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.674985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.675015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.675045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:24.882 [2024-12-06 13:11:11.678077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.678157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.678185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.678209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:24.882 [2024-12-06 13:11:11.706722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:24.882 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:24.882 [2024-12-06 13:11:11.708450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.708507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.708540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.708565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:24.882 [2024-12-06 13:11:11.711212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.711274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.711304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 [2024-12-06 13:11:11.711325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.882 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:24.882 EAL: Scan for (pci) bus failed. 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:24.882 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:25.140 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:25.140 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:25.140 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:25.140 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:25.140 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:25.140 Attaching to 0000:00:10.0 00:16:25.140 Attached to 0000:00:10.0 00:16:25.140 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:25.140 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:25.140 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:25.140 Attaching to 0000:00:11.0 00:16:25.140 Attached to 0000:00:11.0 00:16:25.140 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:25.140 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:25.140 [2024-12-06 13:11:12.053448] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:16:37.464 13:11:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:37.464 13:11:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:37.464 13:11:24 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.04 00:16:37.464 13:11:24 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.04 00:16:37.464 13:11:24 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:37.464 13:11:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.04 00:16:37.464 13:11:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.04 2 00:16:37.464 remove_attach_helper took 43.04s to complete (handling 2 nvme drive(s)) 13:11:24 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68286 00:16:44.044 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68286) - No such process 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68286 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68828 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:44.044 13:11:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68828 00:16:44.044 13:11:30 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68828 ']' 00:16:44.044 13:11:30 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.044 13:11:30 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.044 13:11:30 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.044 13:11:30 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.044 13:11:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:44.044 [2024-12-06 13:11:30.186319] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
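Two housekeeping patterns close the first phase above. The helper is timed with bash's time keyword under TIMEFORMAT=%2R, which is why a bare 43.04 shows up, and the hotplug app is reaped with kill -0 plus wait, which is why the 'No such process' line is expected rather than an error. A hedged sketch (the real timing_cmd wrapper is more involved, and this assumes the helper stays quiet on stderr):

    TIMEFORMAT=%2R
    exec 3>&1                        # keep the real stdout on fd 3
    helper_time=$( { time remove_attach_helper 3 6 false 1>&3; } 2>&1 )
    exec 3>&-
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2
    # reap the app once it has exited on its own (sw_hotplug.sh@93-95)
    if ! kill -0 "$hotplug_pid" 2>/dev/null; then
        wait "$hotplug_pid"          # returns immediately with the app's status
    fi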
00:16:44.044 [2024-12-06 13:11:30.186497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68828 ] 00:16:44.044 [2024-12-06 13:11:30.375035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.044 [2024-12-06 13:11:30.537315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:44.610 13:11:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:44.610 13:11:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:51.172 13:11:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.172 13:11:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:51.172 [2024-12-06 13:11:37.500485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:51.172 [2024-12-06 13:11:37.503751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:37.503816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:37.503841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.172 [2024-12-06 13:11:37.503912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:37.503938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:37.503960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.172 [2024-12-06 13:11:37.503977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:37.503995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:37.504010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.172 [2024-12-06 13:11:37.504031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:37.504045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:37.504063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.172 13:11:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:51.172 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:51.172 13:11:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.172 13:11:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:51.172 13:11:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.172 [2024-12-06 13:11:38.100436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
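From here on the test is in bdev mode: instead of poking sysfs it asks the target which PCI addresses still back a bdev. The bdev_bdfs helper traced at sw_hotplug.sh@12-13 is just this pipeline (rpc.py shown as the usual backend behind the rpc_cmd wrapper):

    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    # poll until the removed controller's BDF drops out of the list
    while bdev_bdfs | grep -q 0000:00:11.0; do
        printf 'Still waiting for %s to be gone\n' 0000:00:11.0
        sleep 0.5
    done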
00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:51.172 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:51.172 [2024-12-06 13:11:38.103464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:38.103514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:38.103540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.172 [2024-12-06 13:11:38.103569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:38.103587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:38.103603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.172 [2024-12-06 13:11:38.103621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:38.103637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:38.103654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.172 [2024-12-06 13:11:38.103670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.172 [2024-12-06 13:11:38.103688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.172 [2024-12-06 13:11:38.103702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:51.738 13:11:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.738 13:11:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:51.738 13:11:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:51.738 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:51.996 13:11:38 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:51.996 13:11:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:04.201 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:04.201 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:04.201 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:04.201 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:04.201 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:04.201 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:04.201 13:11:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.201 13:11:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:04.201 13:11:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:04.201 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:04.202 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:04.202 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:04.202 13:11:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.202 13:11:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:04.202 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:04.202 13:11:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.202 [2024-12-06 13:11:51.100641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:04.202 [2024-12-06 13:11:51.103572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.202 [2024-12-06 13:11:51.103663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.202 [2024-12-06 13:11:51.103687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.202 [2024-12-06 13:11:51.103719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.202 [2024-12-06 13:11:51.103735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.202 [2024-12-06 13:11:51.103753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.202 [2024-12-06 13:11:51.103768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.202 [2024-12-06 13:11:51.103786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.202 [2024-12-06 13:11:51.103801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.202 [2024-12-06 13:11:51.103819] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.202 [2024-12-06 13:11:51.103833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.202 [2024-12-06 13:11:51.103850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.202 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:04.202 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:04.770 [2024-12-06 13:11:51.500646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:04.770 [2024-12-06 13:11:51.503738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.770 [2024-12-06 13:11:51.503796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.770 [2024-12-06 13:11:51.503825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.770 [2024-12-06 13:11:51.503853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.770 [2024-12-06 13:11:51.503873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.770 [2024-12-06 13:11:51.503888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.770 [2024-12-06 13:11:51.503907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.770 [2024-12-06 13:11:51.503922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.770 [2024-12-06 13:11:51.503939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.770 [2024-12-06 13:11:51.503954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.770 [2024-12-06 13:11:51.503971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.770 [2024-12-06 13:11:51.503985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:04.770 13:11:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.770 13:11:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:04.770 13:11:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:04.770 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:05.029 13:11:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:17.249 13:12:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:17.249 13:12:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:17.249 13:12:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:17.249 13:12:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:17.249 13:12:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:17.249 13:12:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:17.249 13:12:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.249 13:12:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 13:12:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:17.249 [2024-12-06 13:12:04.100853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:17.249 [2024-12-06 13:12:04.104517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.249 [2024-12-06 13:12:04.104710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.249 [2024-12-06 13:12:04.104878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.249 [2024-12-06 13:12:04.105150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.249 [2024-12-06 13:12:04.105301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.249 [2024-12-06 13:12:04.105469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.249 [2024-12-06 13:12:04.105634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.249 [2024-12-06 13:12:04.105695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.249 [2024-12-06 13:12:04.105928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.249 [2024-12-06 13:12:04.106111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.249 [2024-12-06 13:12:04.106193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.249 [2024-12-06 13:12:04.106365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:17.249 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:17.250 13:12:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.250 13:12:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:17.250 13:12:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.250 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:17.250 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:17.815 [2024-12-06 13:12:04.600861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:17.815 [2024-12-06 13:12:04.604624] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.815 [2024-12-06 13:12:04.604836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.815 [2024-12-06 13:12:04.605034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.815 [2024-12-06 13:12:04.605274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.815 [2024-12-06 13:12:04.605478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.815 [2024-12-06 13:12:04.605556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.815 [2024-12-06 13:12:04.605757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.816 [2024-12-06 13:12:04.605940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.816 [2024-12-06 13:12:04.606056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.816 [2024-12-06 13:12:04.606281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:17.816 [2024-12-06 13:12:04.606350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.816 [2024-12-06 13:12:04.606543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:17:17.816 13:12:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.816 13:12:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:17.816 13:12:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:17.816 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:18.074 13:12:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:18.074 13:12:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:18.074 13:12:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:18.074 13:12:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.69 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.69 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.69 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.69 2 00:17:30.276 remove_attach_helper took 45.69s to complete (handling 2 nvme drive(s)) 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.276 13:12:17 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:30.276 13:12:17 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:30.276 13:12:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:36.879 13:12:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.879 13:12:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:36.879 13:12:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.879 [2024-12-06 13:12:23.216958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
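The timing harness traced here (timing_cmd in common/autotest_common.sh, with its TIMEFORMAT=%2R and the bare 45.69 echoed back into helper_time a few lines up) amounts to running remove_attach_helper under bash's time keyword and capturing only the elapsed-seconds report. A minimal sketch, assuming the exec and [[ -t 0 ]] lines in the trace exist only to juggle file descriptors so the helper's own output still reaches the console:

timing_cmd() {
    local cmd_es=0 time=0 TIMEFORMAT=%2R
    exec 3>&1    # save the caller's stdout for the command itself
    # The time keyword prints its %2R report on the group's stderr; the
    # trailing 2>&1 routes only that report into the command substitution,
    # while the command's own stdout/stderr escape through fd 3.
    time=$({ time "$@" 1>&3 2>&3; } 2>&1) || cmd_es=$?
    exec 3>&-
    echo "$time"
    return "$cmd_es"
}

Called as timing_cmd remove_attach_helper 3 6 true, something of this shape is what produces the bare 45.69 that sw_hotplug.sh stores in helper_time and reports via the "remove_attach_helper took ..." message above.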
00:17:36.879 [2024-12-06 13:12:23.219815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:36.879 [2024-12-06 13:12:23.219883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.879 [2024-12-06 13:12:23.219908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.879 [2024-12-06 13:12:23.219940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.879 [2024-12-06 13:12:23.219957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.879 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:36.879 [2024-12-06 13:12:23.219975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.879 [2024-12-06 13:12:23.219993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.879 [2024-12-06 13:12:23.220011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.879 [2024-12-06 13:12:23.220025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.879 [2024-12-06 13:12:23.220045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.879 [2024-12-06 13:12:23.220060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.879 [2024-12-06 13:12:23.220080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.879 [2024-12-06 13:12:23.616960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
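The bdev_bdfs helper that the trace keeps expanding (sw_hotplug.sh lines 12-13) is visible almost verbatim in the xtrace; the /dev/fd/63 argument to jq shows rpc_cmd feeding it through process substitution. Reassembled from the traced fragments:

bdev_bdfs() {
    # Ask the running SPDK target for all bdevs over the RPC socket,
    # pull each NVMe bdev's PCI address out of the JSON, and
    # de-duplicate, since one controller can back several namespaces.
    jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}

During a hotplug window its output shrinks to whichever controllers are still attached, which is exactly what the surrounding (( 2 > 0 )) / (( 0 > 0 )) guards count.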
00:17:36.879 [2024-12-06 13:12:23.619200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.879 [2024-12-06 13:12:23.619252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.879 [2024-12-06 13:12:23.619281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.879 [2024-12-06 13:12:23.619309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.879 [2024-12-06 13:12:23.619329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.879 [2024-12-06 13:12:23.619344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.880 [2024-12-06 13:12:23.619365] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.880 [2024-12-06 13:12:23.619380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.880 [2024-12-06 13:12:23.619397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.880 [2024-12-06 13:12:23.619413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:36.880 [2024-12-06 13:12:23.619441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.880 [2024-12-06 13:12:23.619456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:36.880 13:12:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.880 13:12:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:36.880 13:12:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:36.880 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:37.138 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:37.138 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:37.138 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:37.138 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:37.138 13:12:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:37.138 13:12:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:37.138 13:12:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:37.138 13:12:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:49.337 13:12:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.337 13:12:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:49.337 13:12:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:49.337 13:12:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.337 13:12:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:49.337 13:12:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.337 [2024-12-06 13:12:36.217171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
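Each iteration above is the same sysfs choreography: detach both controllers, wait for their bdevs to vanish, then rescan and rebind. xtrace shows the echo arguments (1, uio_pci_generic, the BDF, then an empty string at sw_hotplug.sh:40 and :56-:62) but hides the redirection targets, so the paths below are assumptions based on the standard Linux PCI sysfs interface, not the script's literal text:

# Surprise-remove each controller from the PCI bus (sw_hotplug.sh:40).
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done

# Re-attach (sw_hotplug.sh:56-62): rescan the bus, then steer each
# rediscovered function to uio_pci_generic and clear the override.
echo 1 > /sys/bus/pci/rescan
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    # The trace echoes the BDF twice (:60 and :61); drivers_probe is one
    # plausible target, the second write's destination is not visible.
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done

The uio_pci_generic rebinding matters because SPDK drives the controllers from userspace; handing them back to the kernel nvme driver would defeat the hotplug scenario under test.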
00:17:49.337 [2024-12-06 13:12:36.219718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.337 [2024-12-06 13:12:36.219889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.337 [2024-12-06 13:12:36.220040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.337 [2024-12-06 13:12:36.220285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.337 [2024-12-06 13:12:36.220454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.337 [2024-12-06 13:12:36.220614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.337 [2024-12-06 13:12:36.220773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.337 [2024-12-06 13:12:36.220925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.337 [2024-12-06 13:12:36.221066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.337 [2024-12-06 13:12:36.221226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.337 [2024-12-06 13:12:36.221401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.337 [2024-12-06 13:12:36.221555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:49.337 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:49.904 [2024-12-06 13:12:36.617169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
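The "Still waiting for ... to be gone" lines a little further up come from a half-second poll at sw_hotplug.sh:50-51: keep re-reading the BDF list until the detached controllers stop showing up in bdev_get_bdevs. Judging from the traced (( 2 > 0 )) / (( 0 > 0 )) guards and the sleep 0.5, roughly:

bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))   # re-poll until the list is empty
done

The detach itself is asynchronous: the "in failed state" / "ABORTED - BY REQUEST" blocks interleaved here are the target tearing down each removed controller's qpairs and cancelling its in-flight ASYNC EVENT REQUESTs while this loop spins.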
00:17:49.904 [2024-12-06 13:12:36.619504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.904 [2024-12-06 13:12:36.619720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.904 [2024-12-06 13:12:36.619883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.904 [2024-12-06 13:12:36.620058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.904 [2024-12-06 13:12:36.620318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.904 [2024-12-06 13:12:36.620481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.904 [2024-12-06 13:12:36.620726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.904 [2024-12-06 13:12:36.620929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.904 [2024-12-06 13:12:36.621091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.904 [2024-12-06 13:12:36.621330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:49.904 [2024-12-06 13:12:36.621578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.904 [2024-12-06 13:12:36.621729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:49.904 13:12:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.904 13:12:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:49.904 13:12:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:49.904 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:50.162 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:50.162 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:50.162 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:50.162 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:50.162 13:12:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:50.162 13:12:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:50.162 13:12:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:50.162 13:12:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:02.407 13:12:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.407 13:12:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:02.407 13:12:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:02.407 13:12:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.407 13:12:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:02.407 13:12:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.407 [2024-12-06 13:12:49.217343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
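After the rebind, the helper sleeps 12 seconds (sw_hotplug.sh:66, apparently twice the hotplug_wait of 6 passed to debug_remove_attach_helper) to give re-enumeration time, then asserts that exactly the original pair of BDFs is back. The traced [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0... ]] is an ordinary string equality whose right-hand side xtrace has pattern-escaped; as a sketch:

sleep $((hotplug_wait * 2))        # :66 -- let the target re-attach both controllers
bdfs=($(bdev_bdfs))                # :70 -- re-read the attached BDF list
[[ ${bdfs[*]} == "${nvmes[*]}" ]]  # :71 -- both 0000:00:10.0 and 0000:00:11.0 again

Only when this comparison holds does the (( hotplug_events-- )) loop proceed to the next of its three iterations.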
00:18:02.407 [2024-12-06 13:12:49.219582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.407 [2024-12-06 13:12:49.219749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.407 [2024-12-06 13:12:49.219928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.407 [2024-12-06 13:12:49.220172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.407 [2024-12-06 13:12:49.220383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.407 [2024-12-06 13:12:49.220537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.407 [2024-12-06 13:12:49.220687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.407 [2024-12-06 13:12:49.220821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.407 [2024-12-06 13:12:49.220940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.407 [2024-12-06 13:12:49.221086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.407 [2024-12-06 13:12:49.221267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.407 [2024-12-06 13:12:49.221421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:02.407 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:02.666 [2024-12-06 13:12:49.617359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:02.666 [2024-12-06 13:12:49.619800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.666 [2024-12-06 13:12:49.619999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.667 [2024-12-06 13:12:49.620254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.667 [2024-12-06 13:12:49.620552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.667 [2024-12-06 13:12:49.620724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.667 [2024-12-06 13:12:49.620878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.667 [2024-12-06 13:12:49.621089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.667 [2024-12-06 13:12:49.621249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.667 [2024-12-06 13:12:49.621327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.667 [2024-12-06 13:12:49.621499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:02.667 [2024-12-06 13:12:49.621722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.667 [2024-12-06 13:12:49.621890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:02.925 13:12:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.925 13:12:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:02.925 13:12:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:02.925 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:03.183 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:03.183 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:03.183 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:03.183 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:03.183 13:12:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:03.183 13:12:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:03.183 13:12:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:03.183 13:12:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.00 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.00 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.00 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.00 2 00:18:15.378 remove_attach_helper took 45.00s to complete (handling 2 nvme drive(s)) 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:15.378 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68828 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68828 ']' 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68828 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68828 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.378 13:13:02 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68828' 00:18:15.378 killing process with pid 68828 00:18:15.379 13:13:02 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68828 00:18:15.379 13:13:02 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68828 00:18:17.910 13:13:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:17.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:18.477 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.477 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.477 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:18.477 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:18.477 00:18:18.477 real 2m32.343s 00:18:18.477 user 1m52.725s 00:18:18.477 sys 0m19.464s 00:18:18.477 13:13:05 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.477 ************************************ 00:18:18.477 END TEST sw_hotplug 00:18:18.477 ************************************ 00:18:18.477 13:13:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:18.736 13:13:05 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:18.736 13:13:05 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:18.736 13:13:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.736 13:13:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.736 13:13:05 -- common/autotest_common.sh@10 -- # set +x 00:18:18.736 ************************************ 00:18:18.736 START TEST nvme_xnvme 00:18:18.736 ************************************ 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:18.736 * Looking for test storage... 00:18:18.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.736 13:13:05 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.736 --rc genhtml_branch_coverage=1 00:18:18.736 --rc genhtml_function_coverage=1 00:18:18.736 --rc genhtml_legend=1 00:18:18.736 --rc geninfo_all_blocks=1 00:18:18.736 --rc geninfo_unexecuted_blocks=1 00:18:18.736 00:18:18.736 ' 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.736 --rc genhtml_branch_coverage=1 00:18:18.736 --rc genhtml_function_coverage=1 00:18:18.736 --rc genhtml_legend=1 00:18:18.736 --rc geninfo_all_blocks=1 00:18:18.736 --rc geninfo_unexecuted_blocks=1 00:18:18.736 00:18:18.736 ' 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.736 --rc genhtml_branch_coverage=1 00:18:18.736 --rc genhtml_function_coverage=1 00:18:18.736 --rc genhtml_legend=1 00:18:18.736 --rc geninfo_all_blocks=1 00:18:18.736 --rc geninfo_unexecuted_blocks=1 00:18:18.736 00:18:18.736 ' 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.736 --rc genhtml_branch_coverage=1 00:18:18.736 --rc genhtml_function_coverage=1 00:18:18.736 --rc genhtml_legend=1 00:18:18.736 --rc geninfo_all_blocks=1 00:18:18.736 --rc geninfo_unexecuted_blocks=1 00:18:18.736 00:18:18.736 ' 00:18:18.736 13:13:05 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:18:18.736 13:13:05 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:18.736 13:13:05 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:18.736 13:13:05 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:18.736 13:13:05 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:18.737 13:13:05 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:18.737 13:13:05 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:18:18.737 13:13:05 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:18.737 13:13:05 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:18.737 #define SPDK_CONFIG_H 00:18:18.737 #define SPDK_CONFIG_AIO_FSDEV 1 00:18:18.737 #define SPDK_CONFIG_APPS 1 00:18:18.737 #define SPDK_CONFIG_ARCH native 00:18:18.737 #define SPDK_CONFIG_ASAN 1 00:18:18.737 #undef SPDK_CONFIG_AVAHI 00:18:18.737 #undef SPDK_CONFIG_CET 00:18:18.737 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:18:18.737 #define SPDK_CONFIG_COVERAGE 1 00:18:18.737 #define SPDK_CONFIG_CROSS_PREFIX 00:18:18.737 #undef SPDK_CONFIG_CRYPTO 00:18:18.737 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:18.737 #undef SPDK_CONFIG_CUSTOMOCF 00:18:18.737 #undef SPDK_CONFIG_DAOS 00:18:18.737 #define SPDK_CONFIG_DAOS_DIR 00:18:18.737 #define SPDK_CONFIG_DEBUG 1 00:18:18.737 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:18.737 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:18.737 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:18.737 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:18.737 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:18.737 #undef SPDK_CONFIG_DPDK_UADK 00:18:18.737 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:18.737 #define SPDK_CONFIG_EXAMPLES 1 00:18:18.737 #undef SPDK_CONFIG_FC 00:18:18.737 #define SPDK_CONFIG_FC_PATH 00:18:18.737 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:18.737 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:18.737 #define SPDK_CONFIG_FSDEV 1 00:18:18.737 #undef SPDK_CONFIG_FUSE 00:18:18.737 #undef SPDK_CONFIG_FUZZER 00:18:18.737 #define SPDK_CONFIG_FUZZER_LIB 00:18:18.737 #undef SPDK_CONFIG_GOLANG 00:18:18.737 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:18.737 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:18.737 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:18.737 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:18.737 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:18.737 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:18.737 #undef SPDK_CONFIG_HAVE_LZ4 00:18:18.737 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:18:18.737 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:18:18.737 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:18.737 #define SPDK_CONFIG_IDXD 1 00:18:18.737 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:18.737 #undef SPDK_CONFIG_IPSEC_MB 00:18:18.737 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:18.737 #define SPDK_CONFIG_ISAL 1 00:18:18.737 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:18.737 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:18.737 #define SPDK_CONFIG_LIBDIR 00:18:18.737 #undef SPDK_CONFIG_LTO 00:18:18.737 #define SPDK_CONFIG_MAX_LCORES 128 00:18:18.737 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:18:18.737 #define SPDK_CONFIG_NVME_CUSE 1 00:18:18.737 #undef SPDK_CONFIG_OCF 00:18:18.737 #define SPDK_CONFIG_OCF_PATH 00:18:18.737 #define SPDK_CONFIG_OPENSSL_PATH 00:18:18.737 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:18.737 #define SPDK_CONFIG_PGO_DIR 00:18:18.737 #undef SPDK_CONFIG_PGO_USE 00:18:18.737 #define SPDK_CONFIG_PREFIX /usr/local 00:18:18.737 #undef SPDK_CONFIG_RAID5F 00:18:18.737 #undef SPDK_CONFIG_RBD 00:18:18.737 #define SPDK_CONFIG_RDMA 1 00:18:18.737 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:18.737 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:18.737 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:18.737 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:18.737 #define SPDK_CONFIG_SHARED 1 00:18:18.737 #undef SPDK_CONFIG_SMA 00:18:18.737 #define SPDK_CONFIG_TESTS 1 00:18:18.737 #undef SPDK_CONFIG_TSAN 00:18:18.737 #define SPDK_CONFIG_UBLK 1 00:18:18.737 #define SPDK_CONFIG_UBSAN 1 00:18:18.737 #undef SPDK_CONFIG_UNIT_TESTS 00:18:18.737 #undef SPDK_CONFIG_URING 00:18:18.737 #define SPDK_CONFIG_URING_PATH 00:18:18.737 #undef SPDK_CONFIG_URING_ZNS 00:18:18.737 #undef SPDK_CONFIG_USDT 00:18:18.737 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:18.738 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:18.738 #undef SPDK_CONFIG_VFIO_USER 00:18:18.738 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:18.738 #define SPDK_CONFIG_VHOST 1 00:18:18.738 #define SPDK_CONFIG_VIRTIO 1 00:18:18.738 #undef SPDK_CONFIG_VTUNE 00:18:18.738 #define SPDK_CONFIG_VTUNE_DIR 00:18:18.738 #define SPDK_CONFIG_WERROR 1 00:18:18.738 #define SPDK_CONFIG_WPDK_DIR 00:18:18.738 #define SPDK_CONFIG_XNVME 1 00:18:18.738 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:18.738 13:13:05 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:18.738 13:13:05 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.738 13:13:05 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.738 13:13:05 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.738 13:13:05 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.738 13:13:05 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.738 13:13:05 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.738 13:13:05 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.738 13:13:05 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.738 13:13:05 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:18.738 13:13:05 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.738 13:13:05 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:18.738 13:13:05 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@68 -- # uname -s 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:18.998 
13:13:05 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:18.998 13:13:05 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:18:18.999 13:13:05 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:18:18.999 13:13:05 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:18.999 13:13:05 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:19.000 13:13:05 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
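The sanitizer wiring at autotest_common.sh@199–244 is worth calling out: ASan and UBSan take their options directly, while LeakSanitizer reads a suppression file the harness regenerates on every run. A hedged sketch of that pattern — the option strings are copied verbatim from the trace; the single-redirect write is an assumption, since the log shows a cat followed by an echo:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # libfuse3 is known to leak; suppress it rather than failing every run.
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file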
00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70180 ]] 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70180 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.AZJYjf 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.AZJYjf/tests/xnvme /tmp/spdk.AZJYjf 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:18:19.000 13:13:05 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975281664 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592539136 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975281664 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592539136 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:18:19.000 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.001 13:13:05 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=93244641280 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6458138624 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:18:19.001 * Looking for test storage... 
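The df walk above is set_test_storage populating per-mount associative arrays before picking a directory with at least the requested 2 GiB free. A compact sketch of that selection logic, using the array and variable names visible in the trace; the *1024 scaling is an assumption to reconcile df's 1K blocks with the byte counts logged:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    for target_dir in "${storage_candidates[@]}"; do
        # Mount point backing the candidate directory, as at @385.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done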
00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975281664 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.001 --rc genhtml_branch_coverage=1 00:18:19.001 --rc genhtml_function_coverage=1 00:18:19.001 --rc genhtml_legend=1 00:18:19.001 --rc geninfo_all_blocks=1 00:18:19.001 --rc geninfo_unexecuted_blocks=1 00:18:19.001 00:18:19.001 ' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.001 --rc genhtml_branch_coverage=1 00:18:19.001 --rc genhtml_function_coverage=1 00:18:19.001 --rc genhtml_legend=1 00:18:19.001 --rc geninfo_all_blocks=1 
00:18:19.001 --rc geninfo_unexecuted_blocks=1 00:18:19.001 00:18:19.001 ' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.001 --rc genhtml_branch_coverage=1 00:18:19.001 --rc genhtml_function_coverage=1 00:18:19.001 --rc genhtml_legend=1 00:18:19.001 --rc geninfo_all_blocks=1 00:18:19.001 --rc geninfo_unexecuted_blocks=1 00:18:19.001 00:18:19.001 ' 00:18:19.001 13:13:05 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.001 --rc genhtml_branch_coverage=1 00:18:19.001 --rc genhtml_function_coverage=1 00:18:19.001 --rc genhtml_legend=1 00:18:19.001 --rc geninfo_all_blocks=1 00:18:19.001 --rc geninfo_unexecuted_blocks=1 00:18:19.001 00:18:19.001 ' 00:18:19.001 13:13:05 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.001 13:13:05 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.001 13:13:05 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.001 13:13:05 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.002 13:13:05 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.002 13:13:05 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:19.002 13:13:05 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.002 13:13:05 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:18:19.002 13:13:05 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:19.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.568 Waiting for block devices as requested 00:18:19.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.826 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.826 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:25.093 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:25.093 13:13:11 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:18:25.359 13:13:12 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:18:25.359 13:13:12 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:18:25.620 13:13:12 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:18:25.620 13:13:12 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:18:25.620 No valid GPT data, bailing 00:18:25.620 13:13:12 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:25.620 13:13:12 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:18:25.620 13:13:12 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:25.620 13:13:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:25.620 13:13:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.620 13:13:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.620 13:13:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:25.620 ************************************ 00:18:25.620 START TEST xnvme_rpc 00:18:25.620 ************************************ 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70568 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70568 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70568 ']' 00:18:25.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.620 13:13:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.878 [2024-12-06 13:13:12.719083] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
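Before the target starts, prep_nvme probed /dev/nvme0n1 with spdk-gpt.py and blkid (scripts/common.sh@381–395); an empty PTTYPE plus a return of 1 means the namespace carries no partition table and may be claimed for the test. A sketch of that check, under the assumption that only the blkid fallback matters on this machine:

    block_in_use() {
        local block=$1 pt
        # spdk-gpt.py printed "No valid GPT data, bailing", so fall back to blkid.
        pt=$(blkid -s PTTYPE -o value "$block")
        # Empty PTTYPE -> return 1: not in use, the caller may claim the device.
        [[ -n $pt ]]
    }

    block_in_use /dev/nvme0n1 || xnvme_filename["libaio"]=/dev/nvme0n1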
00:18:25.878 [2024-12-06 13:13:12.719654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70568 ] 00:18:26.135 [2024-12-06 13:13:12.906431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.135 [2024-12-06 13:13:13.040170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.065 xnvme_bdev 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.065 13:13:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.065 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70568 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70568 ']' 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70568 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70568 00:18:27.322 killing process with pid 70568 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70568' 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70568 00:18:27.322 13:13:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70568 00:18:29.846 00:18:29.846 real 0m3.787s 00:18:29.846 user 0m4.060s 00:18:29.846 sys 0m0.601s 00:18:29.846 13:13:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.846 ************************************ 00:18:29.846 END TEST xnvme_rpc 00:18:29.846 ************************************ 00:18:29.846 13:13:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.846 13:13:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:29.846 13:13:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:29.846 13:13:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.846 13:13:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:29.846 ************************************ 00:18:29.846 START TEST xnvme_bdevperf 00:18:29.846 ************************************ 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:29.846 13:13:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:29.846 { 00:18:29.846 "subsystems": [ 00:18:29.846 { 00:18:29.846 "subsystem": "bdev", 00:18:29.846 "config": [ 00:18:29.846 { 00:18:29.846 "params": { 00:18:29.846 "io_mechanism": "libaio", 00:18:29.846 "conserve_cpu": false, 00:18:29.846 "filename": "/dev/nvme0n1", 00:18:29.846 "name": "xnvme_bdev" 00:18:29.846 }, 00:18:29.846 "method": "bdev_xnvme_create" 00:18:29.846 }, 00:18:29.846 { 00:18:29.846 "method": "bdev_wait_for_examine" 00:18:29.846 } 00:18:29.846 ] 00:18:29.846 } 00:18:29.846 ] 00:18:29.846 } 00:18:29.846 [2024-12-06 13:13:16.532637] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:18:29.846 [2024-12-06 13:13:16.532842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70653 ] 00:18:29.846 [2024-12-06 13:13:16.719351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.846 [2024-12-06 13:13:16.847259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.412 Running I/O for 5 seconds... 00:18:32.278 26356.00 IOPS, 102.95 MiB/s [2024-12-06T13:13:20.229Z] 26039.00 IOPS, 101.71 MiB/s [2024-12-06T13:13:21.602Z] 25600.67 IOPS, 100.00 MiB/s [2024-12-06T13:13:22.536Z] 24839.50 IOPS, 97.03 MiB/s 00:18:35.520 Latency(us) 00:18:35.520 [2024-12-06T13:13:22.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.520 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:35.520 xnvme_bdev : 5.00 24421.10 95.39 0.00 0.00 2614.11 217.83 8221.79 00:18:35.520 [2024-12-06T13:13:22.536Z] =================================================================================================================== 00:18:35.520 [2024-12-06T13:13:22.536Z] Total : 24421.10 95.39 0.00 0.00 2614.11 217.83 8221.79 00:18:36.455 13:13:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:36.455 13:13:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:36.455 13:13:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:36.455 13:13:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:36.455 13:13:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.455 { 00:18:36.455 "subsystems": [ 00:18:36.455 { 00:18:36.455 "subsystem": "bdev", 00:18:36.455 "config": [ 00:18:36.455 { 00:18:36.455 "params": { 00:18:36.455 "io_mechanism": "libaio", 00:18:36.455 "conserve_cpu": false, 00:18:36.455 "filename": "/dev/nvme0n1", 00:18:36.455 "name": "xnvme_bdev" 00:18:36.455 }, 00:18:36.455 "method": "bdev_xnvme_create" 00:18:36.455 }, 00:18:36.455 { 00:18:36.455 "method": "bdev_wait_for_examine" 00:18:36.455 } 00:18:36.455 ] 00:18:36.455 } 00:18:36.455 ] 00:18:36.455 } 00:18:36.455 [2024-12-06 13:13:23.407022] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
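Both bdevperf runs above take their bdev table as JSON on an anonymous descriptor (--json /dev/fd/62) emitted by gen_conf. A sketch of an equivalent standalone invocation using process substitution; the configuration is copied verbatim from the log, and SPDK_EXAMPLE_DIR is the path exported earlier in the trace:

    conf='
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": false,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    # Process substitution yields a /dev/fd/NN path, as seen in the trace.
    "$SPDK_EXAMPLE_DIR"/bdevperf -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 \
        --json <(echo "$conf")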
00:18:36.455 [2024-12-06 13:13:23.407247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70728 ] 00:18:36.714 [2024-12-06 13:13:23.594707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.714 [2024-12-06 13:13:23.722738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.282 Running I/O for 5 seconds... 00:18:39.154 26256.00 IOPS, 102.56 MiB/s [2024-12-06T13:13:27.105Z] 27502.50 IOPS, 107.43 MiB/s [2024-12-06T13:13:28.478Z] 29730.00 IOPS, 116.13 MiB/s [2024-12-06T13:13:29.411Z] 30091.25 IOPS, 117.54 MiB/s 00:18:42.395 Latency(us) 00:18:42.395 [2024-12-06T13:13:29.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.395 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:42.395 xnvme_bdev : 5.00 29926.46 116.90 0.00 0.00 2133.23 223.42 5749.29 00:18:42.395 [2024-12-06T13:13:29.411Z] =================================================================================================================== 00:18:42.395 [2024-12-06T13:13:29.411Z] Total : 29926.46 116.90 0.00 0.00 2133.23 223.42 5749.29 00:18:43.328 00:18:43.328 real 0m13.731s 00:18:43.328 user 0m5.170s 00:18:43.328 sys 0m6.185s 00:18:43.328 13:13:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.328 13:13:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 ************************************ 00:18:43.328 END TEST xnvme_bdevperf 00:18:43.328 ************************************ 00:18:43.328 13:13:30 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:43.328 13:13:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.328 13:13:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.328 13:13:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 ************************************ 00:18:43.328 START TEST xnvme_fio_plugin 00:18:43.328 ************************************ 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.328 13:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:43.328 { 00:18:43.328 "subsystems": [ 00:18:43.328 { 00:18:43.328 "subsystem": "bdev", 00:18:43.328 "config": [ 00:18:43.328 { 00:18:43.328 "params": { 00:18:43.328 "io_mechanism": "libaio", 00:18:43.328 "conserve_cpu": false, 00:18:43.328 "filename": "/dev/nvme0n1", 00:18:43.328 "name": "xnvme_bdev" 00:18:43.328 }, 00:18:43.328 "method": "bdev_xnvme_create" 00:18:43.328 }, 00:18:43.328 { 00:18:43.328 "method": "bdev_wait_for_examine" 00:18:43.328 } 00:18:43.328 ] 00:18:43.328 } 00:18:43.328 ] 00:18:43.328 } 00:18:43.624 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:43.624 fio-3.35 00:18:43.624 Starting 1 thread 00:18:50.186 00:18:50.186 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70854: Fri Dec 6 13:13:36 2024 00:18:50.186 read: IOPS=29.7k, BW=116MiB/s (122MB/s)(581MiB/5001msec) 00:18:50.186 slat (usec): min=5, max=604, avg=29.91, stdev=28.02 00:18:50.186 clat (usec): min=94, max=5272, avg=1205.33, stdev=647.24 00:18:50.186 lat (usec): min=163, max=5374, avg=1235.24, stdev=649.08 00:18:50.186 clat percentiles (usec): 00:18:50.186 | 1.00th=[ 223], 5.00th=[ 330], 10.00th=[ 433], 20.00th=[ 619], 00:18:50.186 | 30.00th=[ 783], 40.00th=[ 938], 50.00th=[ 1106], 60.00th=[ 1303], 00:18:50.186 | 70.00th=[ 1500], 80.00th=[ 1745], 90.00th=[ 2089], 95.00th=[ 2409], 00:18:50.186 | 99.00th=[ 2933], 99.50th=[ 3228], 99.90th=[ 3982], 99.95th=[ 4178], 00:18:50.186 | 99.99th=[ 4686] 00:18:50.186 bw ( KiB/s): min=94208, max=146640, per=100.00%, avg=118967.11, 
stdev=16558.39, samples=9 00:18:50.186 iops : min=23552, max=36660, avg=29741.78, stdev=4139.60, samples=9 00:18:50.186 lat (usec) : 100=0.01%, 250=1.76%, 500=11.81%, 750=14.33%, 1000=15.71% 00:18:50.186 lat (msec) : 2=44.34%, 4=11.96%, 10=0.10% 00:18:50.186 cpu : usr=24.98%, sys=54.46%, ctx=91, majf=0, minf=764 00:18:50.186 IO depths : 1=0.1%, 2=1.4%, 4=4.8%, 8=11.8%, 16=25.9%, 32=54.2%, >=64=1.7% 00:18:50.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.186 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:18:50.186 issued rwts: total=148702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.186 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:50.186 00:18:50.186 Run status group 0 (all jobs): 00:18:50.186 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=581MiB (609MB), run=5001-5001msec 00:18:50.755 ----------------------------------------------------- 00:18:50.755 Suppressions used: 00:18:50.755 count bytes template 00:18:50.755 1 11 /usr/src/fio/parse.c 00:18:50.755 1 8 libtcmalloc_minimal.so 00:18:50.755 1 904 libcrypto.so 00:18:50.755 ----------------------------------------------------- 00:18:50.755 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:50.755 13:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:50.755 { 00:18:50.755 "subsystems": [ 00:18:50.755 { 00:18:50.755 "subsystem": "bdev", 00:18:50.755 "config": [ 00:18:50.755 { 00:18:50.755 "params": { 00:18:50.755 "io_mechanism": "libaio", 00:18:50.755 "conserve_cpu": false, 00:18:50.755 "filename": "/dev/nvme0n1", 00:18:50.755 "name": "xnvme_bdev" 00:18:50.755 }, 00:18:50.755 "method": "bdev_xnvme_create" 00:18:50.755 }, 00:18:50.755 { 00:18:50.755 "method": "bdev_wait_for_examine" 00:18:50.755 } 00:18:50.755 ] 00:18:50.755 } 00:18:50.755 ] 00:18:50.755 } 00:18:51.015 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:51.015 fio-3.35 00:18:51.015 Starting 1 thread 00:18:57.641 00:18:57.641 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70946: Fri Dec 6 13:13:43 2024 00:18:57.641 write: IOPS=25.2k, BW=98.4MiB/s (103MB/s)(492MiB/5001msec); 0 zone resets 00:18:57.641 slat (usec): min=5, max=685, avg=35.56, stdev=27.98 00:18:57.641 clat (usec): min=114, max=5474, avg=1391.59, stdev=763.50 00:18:57.641 lat (usec): min=174, max=5532, avg=1427.15, stdev=766.10 00:18:57.641 clat percentiles (usec): 00:18:57.641 | 1.00th=[ 239], 5.00th=[ 347], 10.00th=[ 457], 20.00th=[ 668], 00:18:57.641 | 30.00th=[ 881], 40.00th=[ 1090], 50.00th=[ 1303], 60.00th=[ 1532], 00:18:57.641 | 70.00th=[ 1778], 80.00th=[ 2057], 90.00th=[ 2442], 95.00th=[ 2704], 00:18:57.641 | 99.00th=[ 3490], 99.50th=[ 3851], 99.90th=[ 4490], 99.95th=[ 4621], 00:18:57.641 | 99.99th=[ 5014] 00:18:57.641 bw ( KiB/s): min=89728, max=112384, per=98.97%, avg=99702.44, stdev=8067.25, samples=9 00:18:57.641 iops : min=22432, max=28096, avg=24925.56, stdev=2016.73, samples=9 00:18:57.641 lat (usec) : 250=1.33%, 500=10.64%, 750=11.73%, 1000=12.21% 00:18:57.641 lat (msec) : 2=42.04%, 4=21.68%, 10=0.36% 00:18:57.641 cpu : usr=24.66%, sys=54.64%, ctx=56, majf=0, minf=745 00:18:57.641 IO depths : 1=0.1%, 2=1.6%, 4=5.4%, 8=12.5%, 16=26.0%, 32=52.8%, >=64=1.7% 00:18:57.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.641 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:18:57.641 issued rwts: total=0,125946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.641 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.641 00:18:57.641 Run status group 0 (all jobs): 00:18:57.641 WRITE: bw=98.4MiB/s (103MB/s), 98.4MiB/s-98.4MiB/s (103MB/s-103MB/s), io=492MiB (516MB), run=5001-5001msec 00:18:58.206 ----------------------------------------------------- 00:18:58.206 Suppressions used: 00:18:58.206 count bytes template 00:18:58.206 1 11 /usr/src/fio/parse.c 00:18:58.206 1 8 libtcmalloc_minimal.so 00:18:58.206 1 904 libcrypto.so 00:18:58.206 ----------------------------------------------------- 00:18:58.206 00:18:58.206 ************************************ 00:18:58.206 END TEST xnvme_fio_plugin 00:18:58.206 ************************************ 
00:18:58.206 00:18:58.206 real 0m14.914s 00:18:58.206 user 0m6.244s 00:18:58.206 sys 0m6.250s 00:18:58.206 13:13:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.206 13:13:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:58.206 13:13:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:58.206 13:13:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:58.206 13:13:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:58.206 13:13:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:58.206 13:13:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.206 13:13:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.206 13:13:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:58.206 ************************************ 00:18:58.206 START TEST xnvme_rpc 00:18:58.206 ************************************ 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71038 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71038 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71038 ']' 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.206 13:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.464 [2024-12-06 13:13:45.293311] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:18:58.464 [2024-12-06 13:13:45.293734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71038 ] 00:18:58.464 [2024-12-06 13:13:45.476891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.722 [2024-12-06 13:13:45.603121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.655 xnvme_bdev 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.655 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71038 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71038 ']' 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71038 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71038 00:18:59.912 killing process with pid 71038 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71038' 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71038 00:18:59.912 13:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71038 00:19:02.493 ************************************ 00:19:02.493 END TEST xnvme_rpc 00:19:02.493 ************************************ 00:19:02.493 00:19:02.493 real 0m3.707s 00:19:02.493 user 0m3.819s 00:19:02.493 sys 0m0.576s 00:19:02.493 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.493 13:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.493 13:13:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:02.493 13:13:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:02.493 13:13:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.493 13:13:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.493 ************************************ 00:19:02.493 START TEST xnvme_bdevperf 00:19:02.493 ************************************ 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:02.493 13:13:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:02.493 { 00:19:02.493 "subsystems": [ 00:19:02.493 { 00:19:02.493 "subsystem": "bdev", 00:19:02.493 "config": [ 00:19:02.493 { 00:19:02.493 "params": { 00:19:02.493 "io_mechanism": "libaio", 00:19:02.493 "conserve_cpu": true, 00:19:02.493 "filename": "/dev/nvme0n1", 00:19:02.493 "name": "xnvme_bdev" 00:19:02.493 }, 00:19:02.493 "method": "bdev_xnvme_create" 00:19:02.493 }, 00:19:02.493 { 00:19:02.493 "method": "bdev_wait_for_examine" 00:19:02.493 } 00:19:02.493 ] 00:19:02.493 } 00:19:02.493 ] 00:19:02.493 } 00:19:02.493 [2024-12-06 13:13:49.059775] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:19:02.494 [2024-12-06 13:13:49.060207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71112 ] 00:19:02.494 [2024-12-06 13:13:49.248648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.494 [2024-12-06 13:13:49.384451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.752 Running I/O for 5 seconds... 00:19:05.080 30946.00 IOPS, 120.88 MiB/s [2024-12-06T13:13:53.030Z] 32120.00 IOPS, 125.47 MiB/s [2024-12-06T13:13:53.965Z] 31762.67 IOPS, 124.07 MiB/s [2024-12-06T13:13:54.901Z] 31294.75 IOPS, 122.25 MiB/s 00:19:07.885 Latency(us) 00:19:07.885 [2024-12-06T13:13:54.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.885 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:07.885 xnvme_bdev : 5.00 31071.63 121.37 0.00 0.00 2054.90 253.21 4438.57 00:19:07.885 [2024-12-06T13:13:54.901Z] =================================================================================================================== 00:19:07.885 [2024-12-06T13:13:54.901Z] Total : 31071.63 121.37 0.00 0.00 2054.90 253.21 4438.57 00:19:08.819 13:13:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:08.819 13:13:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:08.819 13:13:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:08.819 13:13:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:08.819 13:13:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:09.078 { 00:19:09.078 "subsystems": [ 00:19:09.078 { 00:19:09.078 "subsystem": "bdev", 00:19:09.078 "config": [ 00:19:09.078 { 00:19:09.078 "params": { 00:19:09.078 "io_mechanism": "libaio", 00:19:09.078 "conserve_cpu": true, 00:19:09.078 "filename": "/dev/nvme0n1", 00:19:09.078 "name": "xnvme_bdev" 00:19:09.078 }, 00:19:09.078 "method": "bdev_xnvme_create" 00:19:09.078 }, 00:19:09.078 { 00:19:09.078 "method": "bdev_wait_for_examine" 00:19:09.078 } 00:19:09.078 ] 00:19:09.078 } 00:19:09.078 ] 00:19:09.078 } 00:19:09.078 [2024-12-06 13:13:55.919303] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:19:09.078 [2024-12-06 13:13:55.919817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71193 ] 00:19:09.336 [2024-12-06 13:13:56.110253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.336 [2024-12-06 13:13:56.243121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.595 Running I/O for 5 seconds... 00:19:11.903 31546.00 IOPS, 123.23 MiB/s [2024-12-06T13:13:59.973Z] 31506.50 IOPS, 123.07 MiB/s [2024-12-06T13:14:00.908Z] 31103.00 IOPS, 121.50 MiB/s [2024-12-06T13:14:01.844Z] 31825.50 IOPS, 124.32 MiB/s 00:19:14.828 Latency(us) 00:19:14.828 [2024-12-06T13:14:01.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.828 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:14.828 xnvme_bdev : 5.00 31289.10 122.22 0.00 0.00 2040.21 223.42 4140.68 00:19:14.828 [2024-12-06T13:14:01.844Z] =================================================================================================================== 00:19:14.828 [2024-12-06T13:14:01.844Z] Total : 31289.10 122.22 0.00 0.00 2040.21 223.42 4140.68 00:19:15.762 00:19:15.762 real 0m13.705s 00:19:15.762 user 0m5.258s 00:19:15.762 sys 0m6.085s 00:19:15.762 ************************************ 00:19:15.762 END TEST xnvme_bdevperf 00:19:15.762 ************************************ 00:19:15.762 13:14:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.762 13:14:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:15.762 13:14:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:15.763 13:14:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.763 13:14:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.763 13:14:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.763 ************************************ 00:19:15.763 START TEST xnvme_fio_plugin 00:19:15.763 ************************************ 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:15.763 13:14:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:15.763 { 00:19:15.763 "subsystems": [ 00:19:15.763 { 00:19:15.763 "subsystem": "bdev", 00:19:15.763 "config": [ 00:19:15.763 { 00:19:15.763 "params": { 00:19:15.763 "io_mechanism": "libaio", 00:19:15.763 "conserve_cpu": true, 00:19:15.763 "filename": "/dev/nvme0n1", 00:19:15.763 "name": "xnvme_bdev" 00:19:15.763 }, 00:19:15.763 "method": "bdev_xnvme_create" 00:19:15.763 }, 00:19:15.763 { 00:19:15.763 "method": "bdev_wait_for_examine" 00:19:15.763 } 00:19:15.763 ] 00:19:15.763 } 00:19:15.763 ] 00:19:15.763 } 00:19:16.034 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:16.034 fio-3.35 00:19:16.034 Starting 1 thread 00:19:22.589 00:19:22.589 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71321: Fri Dec 6 13:14:08 2024 00:19:22.589 read: IOPS=24.6k, BW=96.3MiB/s (101MB/s)(481MiB/5001msec) 00:19:22.589 slat (usec): min=5, max=780, avg=36.36, stdev=28.30 00:19:22.589 clat (usec): min=40, max=6082, avg=1425.57, stdev=795.25 00:19:22.589 lat (usec): min=144, max=6151, avg=1461.92, stdev=798.37 00:19:22.589 clat percentiles (usec): 00:19:22.589 | 1.00th=[ 239], 5.00th=[ 351], 10.00th=[ 469], 20.00th=[ 685], 00:19:22.589 | 30.00th=[ 898], 40.00th=[ 1106], 50.00th=[ 1303], 60.00th=[ 1532], 00:19:22.589 | 70.00th=[ 1795], 80.00th=[ 2114], 90.00th=[ 2507], 95.00th=[ 2835], 00:19:22.589 | 99.00th=[ 3589], 99.50th=[ 3982], 99.90th=[ 4686], 99.95th=[ 4883], 00:19:22.589 | 99.99th=[ 5276] 00:19:22.589 bw ( KiB/s): min=81968, max=122456, per=100.00%, avg=99366.22, stdev=13146.96, samples=9 
00:19:22.589 iops : min=20492, max=30614, avg=24841.56, stdev=3286.74, samples=9 00:19:22.589 lat (usec) : 50=0.01%, 250=1.28%, 500=10.12%, 750=11.57%, 1000=12.04% 00:19:22.589 lat (msec) : 2=41.32%, 4=23.18%, 10=0.49% 00:19:22.589 cpu : usr=24.98%, sys=53.42%, ctx=76, majf=0, minf=707 00:19:22.589 IO depths : 1=0.1%, 2=1.6%, 4=5.4%, 8=12.1%, 16=25.8%, 32=53.3%, >=64=1.7% 00:19:22.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.589 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:22.589 issued rwts: total=123255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.589 00:19:22.589 Run status group 0 (all jobs): 00:19:22.589 READ: bw=96.3MiB/s (101MB/s), 96.3MiB/s-96.3MiB/s (101MB/s-101MB/s), io=481MiB (505MB), run=5001-5001msec 00:19:23.155 ----------------------------------------------------- 00:19:23.155 Suppressions used: 00:19:23.155 count bytes template 00:19:23.155 1 11 /usr/src/fio/parse.c 00:19:23.155 1 8 libtcmalloc_minimal.so 00:19:23.155 1 904 libcrypto.so 00:19:23.155 ----------------------------------------------------- 00:19:23.155 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:23.155 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:23.412 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:23.413 13:14:10 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:23.413 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:23.413 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.413 13:14:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:23.413 { 00:19:23.413 "subsystems": [ 00:19:23.413 { 00:19:23.413 "subsystem": "bdev", 00:19:23.413 "config": [ 00:19:23.413 { 00:19:23.413 "params": { 00:19:23.413 "io_mechanism": "libaio", 00:19:23.413 "conserve_cpu": true, 00:19:23.413 "filename": "/dev/nvme0n1", 00:19:23.413 "name": "xnvme_bdev" 00:19:23.413 }, 00:19:23.413 "method": "bdev_xnvme_create" 00:19:23.413 }, 00:19:23.413 { 00:19:23.413 "method": "bdev_wait_for_examine" 00:19:23.413 } 00:19:23.413 ] 00:19:23.413 } 00:19:23.413 ] 00:19:23.413 } 00:19:23.413 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:23.413 fio-3.35 00:19:23.413 Starting 1 thread 00:19:29.971 00:19:29.971 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71413: Fri Dec 6 13:14:16 2024 00:19:29.971 write: IOPS=25.3k, BW=98.8MiB/s (104MB/s)(494MiB/5001msec); 0 zone resets 00:19:29.971 slat (usec): min=5, max=880, avg=35.32, stdev=28.86 00:19:29.971 clat (usec): min=115, max=5758, avg=1395.93, stdev=750.14 00:19:29.971 lat (usec): min=182, max=5838, avg=1431.25, stdev=752.19 00:19:29.971 clat percentiles (usec): 00:19:29.971 | 1.00th=[ 245], 5.00th=[ 355], 10.00th=[ 469], 20.00th=[ 693], 00:19:29.971 | 30.00th=[ 898], 40.00th=[ 1106], 50.00th=[ 1319], 60.00th=[ 1549], 00:19:29.971 | 70.00th=[ 1778], 80.00th=[ 2040], 90.00th=[ 2376], 95.00th=[ 2671], 00:19:29.971 | 99.00th=[ 3589], 99.50th=[ 3884], 99.90th=[ 4424], 99.95th=[ 4621], 00:19:29.971 | 99.99th=[ 5080] 00:19:29.971 bw ( KiB/s): min=88472, max=113304, per=100.00%, avg=101493.00, stdev=8437.79, samples=9 00:19:29.971 iops : min=22118, max=28326, avg=25373.22, stdev=2109.49, samples=9 00:19:29.971 lat (usec) : 250=1.10%, 500=10.28%, 750=11.52%, 1000=11.84% 00:19:29.971 lat (msec) : 2=44.08%, 4=20.80%, 10=0.38% 00:19:29.971 cpu : usr=24.42%, sys=55.00%, ctx=105, majf=0, minf=627 00:19:29.971 IO depths : 1=0.1%, 2=1.5%, 4=5.3%, 8=12.4%, 16=26.0%, 32=52.9%, >=64=1.7% 00:19:29.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.971 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:29.971 issued rwts: total=0,126523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.971 00:19:29.971 Run status group 0 (all jobs): 00:19:29.971 WRITE: bw=98.8MiB/s (104MB/s), 98.8MiB/s-98.8MiB/s (104MB/s-104MB/s), io=494MiB (518MB), run=5001-5001msec 00:19:30.918 ----------------------------------------------------- 00:19:30.918 Suppressions used: 00:19:30.918 count bytes template 00:19:30.918 1 11 /usr/src/fio/parse.c 00:19:30.918 1 8 libtcmalloc_minimal.so 00:19:30.918 1 904 libcrypto.so 00:19:30.918 ----------------------------------------------------- 00:19:30.918 00:19:30.918 00:19:30.918 real 0m14.906s 00:19:30.918 user 0m6.249s 00:19:30.918 sys 0m6.211s 00:19:30.918 13:14:17 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.918 13:14:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:30.918 ************************************ 00:19:30.918 END TEST xnvme_fio_plugin 00:19:30.918 ************************************ 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:30.918 13:14:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:30.918 13:14:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:30.918 13:14:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.918 13:14:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:30.918 ************************************ 00:19:30.918 START TEST xnvme_rpc 00:19:30.918 ************************************ 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71502 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71502 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71502 ']' 00:19:30.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.918 13:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:30.918 [2024-12-06 13:14:17.786941] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:19:30.918 [2024-12-06 13:14:17.787137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71502 ] 00:19:31.176 [2024-12-06 13:14:17.963664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.176 [2024-12-06 13:14:18.094445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.108 xnvme_bdev 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:32.108 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71502 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71502 ']' 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71502 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71502 00:19:32.365 killing process with pid 71502 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71502' 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71502 00:19:32.365 13:14:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71502 00:19:34.891 ************************************ 00:19:34.891 END TEST xnvme_rpc 00:19:34.891 ************************************ 00:19:34.891 00:19:34.891 real 0m3.804s 00:19:34.891 user 0m3.993s 00:19:34.891 sys 0m0.545s 00:19:34.891 13:14:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.891 13:14:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:34.891 13:14:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:34.891 13:14:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:34.891 13:14:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.891 13:14:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.891 ************************************ 00:19:34.891 START TEST xnvme_bdevperf 00:19:34.891 ************************************ 00:19:34.891 13:14:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:34.891 13:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:34.891 13:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:19:34.891 13:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:34.891 13:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:34.891 13:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:19:34.892 13:14:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:34.892 13:14:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:34.892 { 00:19:34.892 "subsystems": [ 00:19:34.892 { 00:19:34.892 "subsystem": "bdev", 00:19:34.892 "config": [ 00:19:34.892 { 00:19:34.892 "params": { 00:19:34.892 "io_mechanism": "io_uring", 00:19:34.892 "conserve_cpu": false, 00:19:34.892 "filename": "/dev/nvme0n1", 00:19:34.892 "name": "xnvme_bdev" 00:19:34.892 }, 00:19:34.892 "method": "bdev_xnvme_create" 00:19:34.892 }, 00:19:34.892 { 00:19:34.892 "method": "bdev_wait_for_examine" 00:19:34.892 } 00:19:34.892 ] 00:19:34.892 } 00:19:34.892 ] 00:19:34.892 } 00:19:34.892 [2024-12-06 13:14:21.628756] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:19:34.892 [2024-12-06 13:14:21.628916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71586 ] 00:19:34.892 [2024-12-06 13:14:21.801427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.149 [2024-12-06 13:14:21.934658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.407 Running I/O for 5 seconds... 00:19:37.277 57792.00 IOPS, 225.75 MiB/s [2024-12-06T13:14:25.670Z] 59008.00 IOPS, 230.50 MiB/s [2024-12-06T13:14:26.604Z] 58666.67 IOPS, 229.17 MiB/s [2024-12-06T13:14:27.539Z] 57760.00 IOPS, 225.62 MiB/s [2024-12-06T13:14:27.539Z] 57459.20 IOPS, 224.45 MiB/s 00:19:40.523 Latency(us) 00:19:40.523 [2024-12-06T13:14:27.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.523 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:40.523 xnvme_bdev : 5.00 57446.25 224.40 0.00 0.00 1110.67 733.56 2934.23 00:19:40.523 [2024-12-06T13:14:27.539Z] =================================================================================================================== 00:19:40.523 [2024-12-06T13:14:27.539Z] Total : 57446.25 224.40 0.00 0.00 1110.67 733.56 2934.23 00:19:41.457 13:14:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:41.457 13:14:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:41.457 13:14:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:41.457 13:14:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:41.457 13:14:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:41.457 { 00:19:41.457 "subsystems": [ 00:19:41.457 { 00:19:41.457 "subsystem": "bdev", 00:19:41.457 "config": [ 00:19:41.457 { 00:19:41.457 "params": { 00:19:41.457 "io_mechanism": "io_uring", 00:19:41.457 "conserve_cpu": false, 00:19:41.457 "filename": "/dev/nvme0n1", 00:19:41.457 "name": "xnvme_bdev" 00:19:41.457 }, 00:19:41.457 "method": "bdev_xnvme_create" 00:19:41.457 }, 00:19:41.457 { 00:19:41.457 "method": "bdev_wait_for_examine" 00:19:41.457 } 00:19:41.457 ] 00:19:41.457 } 00:19:41.458 ] 00:19:41.458 } 00:19:41.458 [2024-12-06 13:14:28.446541] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:19:41.458 [2024-12-06 13:14:28.446733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71663 ] 00:19:41.715 [2024-12-06 13:14:28.632035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.975 [2024-12-06 13:14:28.753094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.234 Running I/O for 5 seconds... 00:19:44.103 54656.00 IOPS, 213.50 MiB/s [2024-12-06T13:14:32.495Z] 51968.00 IOPS, 203.00 MiB/s [2024-12-06T13:14:33.119Z] 49344.00 IOPS, 192.75 MiB/s [2024-12-06T13:14:34.492Z] 47872.00 IOPS, 187.00 MiB/s [2024-12-06T13:14:34.492Z] 47129.60 IOPS, 184.10 MiB/s 00:19:47.476 Latency(us) 00:19:47.476 [2024-12-06T13:14:34.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.477 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:47.477 xnvme_bdev : 5.00 47109.55 184.02 0.00 0.00 1354.10 822.92 8102.63 00:19:47.477 [2024-12-06T13:14:34.493Z] =================================================================================================================== 00:19:47.477 [2024-12-06T13:14:34.493Z] Total : 47109.55 184.02 0.00 0.00 1354.10 822.92 8102.63 00:19:48.439 00:19:48.439 real 0m13.639s 00:19:48.439 user 0m7.094s 00:19:48.439 sys 0m6.340s 00:19:48.439 ************************************ 00:19:48.439 END TEST xnvme_bdevperf 00:19:48.439 ************************************ 00:19:48.439 13:14:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.439 13:14:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:48.439 13:14:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:48.439 13:14:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.439 13:14:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.439 13:14:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:48.439 ************************************ 00:19:48.439 START TEST xnvme_fio_plugin 00:19:48.439 ************************************ 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:48.439 13:14:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:48.439 { 00:19:48.439 "subsystems": [ 00:19:48.439 { 00:19:48.439 "subsystem": "bdev", 00:19:48.439 "config": [ 00:19:48.439 { 00:19:48.439 "params": { 00:19:48.439 "io_mechanism": "io_uring", 00:19:48.439 "conserve_cpu": false, 00:19:48.439 "filename": "/dev/nvme0n1", 00:19:48.439 "name": "xnvme_bdev" 00:19:48.439 }, 00:19:48.439 "method": "bdev_xnvme_create" 00:19:48.439 }, 00:19:48.439 { 00:19:48.439 "method": "bdev_wait_for_examine" 00:19:48.439 } 00:19:48.439 ] 00:19:48.439 } 00:19:48.439 ] 00:19:48.439 } 00:19:48.697 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:48.697 fio-3.35 00:19:48.697 Starting 1 thread 00:19:55.253 00:19:55.253 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71788: Fri Dec 6 13:14:41 2024 00:19:55.253 read: IOPS=43.4k, BW=169MiB/s (178MB/s)(848MiB/5001msec) 00:19:55.253 slat (usec): min=2, max=1018, avg= 4.41, stdev= 3.14 00:19:55.253 clat (usec): min=542, max=2985, avg=1299.10, stdev=176.46 00:19:55.253 lat (usec): min=546, max=2995, avg=1303.51, stdev=177.15 00:19:55.253 clat percentiles (usec): 00:19:55.253 | 1.00th=[ 1012], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1156], 00:19:55.253 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[ 1270], 60.00th=[ 1319], 00:19:55.253 | 70.00th=[ 1352], 80.00th=[ 1418], 90.00th=[ 1532], 95.00th=[ 1647], 00:19:55.253 | 99.00th=[ 1844], 99.50th=[ 1926], 99.90th=[ 2147], 99.95th=[ 2245], 00:19:55.253 | 99.99th=[ 2900] 00:19:55.253 bw ( KiB/s): 
min=159232, max=178688, per=99.81%, avg=173226.67, stdev=6014.64, samples=9 00:19:55.253 iops : min=39808, max=44672, avg=43306.67, stdev=1503.66, samples=9 00:19:55.253 lat (usec) : 750=0.01%, 1000=0.66% 00:19:55.253 lat (msec) : 2=99.08%, 4=0.25% 00:19:55.253 cpu : usr=35.14%, sys=63.70%, ctx=7, majf=0, minf=762 00:19:55.253 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:55.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.253 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:55.253 issued rwts: total=216987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.253 00:19:55.253 Run status group 0 (all jobs): 00:19:55.253 READ: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=848MiB (889MB), run=5001-5001msec 00:19:55.821 ----------------------------------------------------- 00:19:55.821 Suppressions used: 00:19:55.821 count bytes template 00:19:55.821 1 11 /usr/src/fio/parse.c 00:19:55.821 1 8 libtcmalloc_minimal.so 00:19:55.821 1 904 libcrypto.so 00:19:55.821 ----------------------------------------------------- 00:19:55.821 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.821 13:14:42 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.821 13:14:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.821 { 00:19:55.821 "subsystems": [ 00:19:55.821 { 00:19:55.821 "subsystem": "bdev", 00:19:55.821 "config": [ 00:19:55.821 { 00:19:55.821 "params": { 00:19:55.821 "io_mechanism": "io_uring", 00:19:55.821 "conserve_cpu": false, 00:19:55.821 "filename": "/dev/nvme0n1", 00:19:55.821 "name": "xnvme_bdev" 00:19:55.821 }, 00:19:55.821 "method": "bdev_xnvme_create" 00:19:55.821 }, 00:19:55.821 { 00:19:55.821 "method": "bdev_wait_for_examine" 00:19:55.821 } 00:19:55.821 ] 00:19:55.821 } 00:19:55.821 ] 00:19:55.821 } 00:19:56.080 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:56.080 fio-3.35 00:19:56.080 Starting 1 thread 00:20:02.723 00:20:02.723 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71880: Fri Dec 6 13:14:48 2024 00:20:02.723 write: IOPS=41.9k, BW=164MiB/s (172MB/s)(819MiB/5001msec); 0 zone resets 00:20:02.723 slat (usec): min=2, max=102, avg= 4.98, stdev= 2.59 00:20:02.723 clat (usec): min=902, max=2968, avg=1330.64, stdev=213.19 00:20:02.723 lat (usec): min=905, max=2988, avg=1335.62, stdev=214.14 00:20:02.723 clat percentiles (usec): 00:20:02.723 | 1.00th=[ 1012], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1156], 00:20:02.723 | 30.00th=[ 1205], 40.00th=[ 1254], 50.00th=[ 1287], 60.00th=[ 1336], 00:20:02.723 | 70.00th=[ 1385], 80.00th=[ 1467], 90.00th=[ 1598], 95.00th=[ 1745], 00:20:02.723 | 99.00th=[ 2024], 99.50th=[ 2212], 99.90th=[ 2638], 99.95th=[ 2737], 00:20:02.723 | 99.99th=[ 2868] 00:20:02.723 bw ( KiB/s): min=160256, max=174754, per=99.44%, avg=166664.22, stdev=5401.84, samples=9 00:20:02.723 iops : min=40064, max=43688, avg=41666.00, stdev=1350.37, samples=9 00:20:02.723 lat (usec) : 1000=0.70% 00:20:02.723 lat (msec) : 2=98.18%, 4=1.12% 00:20:02.723 cpu : usr=37.83%, sys=61.05%, ctx=9, majf=0, minf=763 00:20:02.723 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:02.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.723 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:02.723 issued rwts: total=0,209536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:02.723 00:20:02.723 Run status group 0 (all jobs): 00:20:02.723 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=819MiB (858MB), run=5001-5001msec 00:20:03.290 ----------------------------------------------------- 00:20:03.290 Suppressions used: 00:20:03.290 count bytes template 00:20:03.290 1 11 /usr/src/fio/parse.c 00:20:03.290 1 8 libtcmalloc_minimal.so 00:20:03.290 1 904 libcrypto.so 00:20:03.290 ----------------------------------------------------- 00:20:03.290 00:20:03.290 00:20:03.290 real 0m14.823s 00:20:03.290 user 0m7.447s 00:20:03.290 sys 0m6.980s 00:20:03.290 13:14:50 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.290 ************************************ 00:20:03.290 END TEST xnvme_fio_plugin 00:20:03.290 ************************************ 00:20:03.290 13:14:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:03.290 13:14:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:03.290 13:14:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:03.290 13:14:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:03.290 13:14:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:03.290 13:14:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:03.290 13:14:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.290 13:14:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.290 ************************************ 00:20:03.290 START TEST xnvme_rpc 00:20:03.290 ************************************ 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71966 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71966 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71966 ']' 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.290 13:14:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:03.290 [2024-12-06 13:14:50.222144] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
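For orientation: the xnvme_rpc test starting here follows SPDK's standard RPC pattern — launch spdk_tgt, wait for its UNIX-domain socket, create the bdev over RPC, read the config back, and assert on each parameter. A hand-run equivalent might look like the sketch below (rpc.py is SPDK's stock RPC client and defaults to /var/tmp/spdk.sock; the positional arguments, -c flag, and jq filters mirror what the trace above shows rpc_cmd forwarding):

    # start the target in the background; the harness polls /var/tmp/spdk.sock instead of sleeping
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &

    # create an xnvme bdev over io_uring; -c enables conserve_cpu
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

    # read the running config back and pull out one parameter to verify it
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected output: true

    # tear the bdev down again, as the test does before killing the target
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev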
00:20:03.290 [2024-12-06 13:14:50.222673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:20:03.548 [2024-12-06 13:14:50.409726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.548 [2024-12-06 13:14:50.554506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 xnvme_bdev 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.485 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71966 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71966 ']' 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71966 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71966 00:20:04.743 killing process with pid 71966 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71966' 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71966 00:20:04.743 13:14:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71966 00:20:07.293 ************************************ 00:20:07.293 END TEST xnvme_rpc 00:20:07.293 ************************************ 00:20:07.293 00:20:07.293 real 0m3.782s 00:20:07.293 user 0m3.931s 00:20:07.293 sys 0m0.595s 00:20:07.293 13:14:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.294 13:14:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:07.294 13:14:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:07.294 13:14:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.294 13:14:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.294 13:14:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:07.294 ************************************ 00:20:07.294 START TEST xnvme_bdevperf 00:20:07.294 ************************************ 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:07.294 13:14:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:07.294 { 00:20:07.294 "subsystems": [ 00:20:07.294 { 00:20:07.294 "subsystem": "bdev", 00:20:07.294 "config": [ 00:20:07.294 { 00:20:07.294 "params": { 00:20:07.294 "io_mechanism": "io_uring", 00:20:07.294 "conserve_cpu": true, 00:20:07.294 "filename": "/dev/nvme0n1", 00:20:07.294 "name": "xnvme_bdev" 00:20:07.294 }, 00:20:07.294 "method": "bdev_xnvme_create" 00:20:07.294 }, 00:20:07.294 { 00:20:07.294 "method": "bdev_wait_for_examine" 00:20:07.294 } 00:20:07.294 ] 00:20:07.294 } 00:20:07.294 ] 00:20:07.294 } 00:20:07.294 [2024-12-06 13:14:54.024211] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:20:07.294 [2024-12-06 13:14:54.024359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72046 ] 00:20:07.294 [2024-12-06 13:14:54.200429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.552 [2024-12-06 13:14:54.323792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.810 Running I/O for 5 seconds... 00:20:09.677 47744.00 IOPS, 186.50 MiB/s [2024-12-06T13:14:58.073Z] 45392.00 IOPS, 177.31 MiB/s [2024-12-06T13:14:59.009Z] 45984.00 IOPS, 179.62 MiB/s [2024-12-06T13:14:59.945Z] 45032.00 IOPS, 175.91 MiB/s 00:20:12.929 Latency(us) 00:20:12.929 [2024-12-06T13:14:59.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.929 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:12.929 xnvme_bdev : 5.00 44945.18 175.57 0.00 0.00 1419.40 826.65 4885.41 00:20:12.929 [2024-12-06T13:14:59.945Z] =================================================================================================================== 00:20:12.929 [2024-12-06T13:14:59.945Z] Total : 44945.18 175.57 0.00 0.00 1419.40 826.65 4885.41 00:20:13.890 13:15:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:13.890 13:15:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:13.890 13:15:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:13.890 13:15:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:13.890 13:15:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:13.890 { 00:20:13.890 "subsystems": [ 00:20:13.890 { 00:20:13.890 "subsystem": "bdev", 00:20:13.890 "config": [ 00:20:13.890 { 00:20:13.890 "params": { 00:20:13.890 "io_mechanism": "io_uring", 00:20:13.890 "conserve_cpu": true, 00:20:13.890 "filename": "/dev/nvme0n1", 00:20:13.890 "name": "xnvme_bdev" 00:20:13.890 }, 00:20:13.890 "method": "bdev_xnvme_create" 00:20:13.890 }, 00:20:13.890 { 00:20:13.890 "method": "bdev_wait_for_examine" 00:20:13.890 } 00:20:13.890 ] 00:20:13.890 } 00:20:13.890 ] 00:20:13.890 } 00:20:13.890 [2024-12-06 13:15:00.876424] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
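As in the randread pass above, bdevperf takes no device path of its own here: the device comes from the JSON bdev config the harness plumbs in on fd 62 (--json /dev/fd/62), and -T selects the bdev by name. One way to reproduce a single pass by hand is process substitution instead of fd 62 (a sketch; the inline JSON mirrors the config just printed, and -q is queue depth, -o the IO size in bytes, -t the run time in seconds):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
            {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring",
             "conserve_cpu":true,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
            {"method":"bdev_wait_for_examine"}]}]}') \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096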
00:20:13.890 [2024-12-06 13:15:00.876593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72127 ] 00:20:14.148 [2024-12-06 13:15:01.073057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.417 [2024-12-06 13:15:01.206029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.727 Running I/O for 5 seconds... 00:20:16.595 41664.00 IOPS, 162.75 MiB/s [2024-12-06T13:15:04.986Z] 41529.00 IOPS, 162.22 MiB/s [2024-12-06T13:15:05.919Z] 40294.00 IOPS, 157.40 MiB/s [2024-12-06T13:15:06.853Z] 39355.50 IOPS, 153.73 MiB/s 00:20:19.837 Latency(us) 00:20:19.837 [2024-12-06T13:15:06.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.837 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:19.837 xnvme_bdev : 5.00 38835.27 151.70 0.00 0.00 1642.00 487.80 5808.87 00:20:19.837 [2024-12-06T13:15:06.853Z] =================================================================================================================== 00:20:19.837 [2024-12-06T13:15:06.853Z] Total : 38835.27 151.70 0.00 0.00 1642.00 487.80 5808.87 00:20:20.774 00:20:20.774 real 0m13.808s 00:20:20.774 user 0m8.012s 00:20:20.774 sys 0m5.254s 00:20:20.774 13:15:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.774 13:15:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:20.774 ************************************ 00:20:20.774 END TEST xnvme_bdevperf 00:20:20.774 ************************************ 00:20:20.774 13:15:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:20.774 13:15:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:20.774 13:15:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.774 13:15:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:21.034 ************************************ 00:20:21.034 START TEST xnvme_fio_plugin 00:20:21.034 ************************************ 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:21.034 13:15:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:21.034 { 00:20:21.034 "subsystems": [ 00:20:21.034 { 00:20:21.034 "subsystem": "bdev", 00:20:21.034 "config": [ 00:20:21.034 { 00:20:21.034 "params": { 00:20:21.034 "io_mechanism": "io_uring", 00:20:21.034 "conserve_cpu": true, 00:20:21.034 "filename": "/dev/nvme0n1", 00:20:21.034 "name": "xnvme_bdev" 00:20:21.034 }, 00:20:21.034 "method": "bdev_xnvme_create" 00:20:21.034 }, 00:20:21.034 { 00:20:21.034 "method": "bdev_wait_for_examine" 00:20:21.034 } 00:20:21.034 ] 00:20:21.034 } 00:20:21.034 ] 00:20:21.034 } 00:20:21.293 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:21.293 fio-3.35 00:20:21.293 Starting 1 thread 00:20:27.891 00:20:27.892 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72246: Fri Dec 6 13:15:13 2024 00:20:27.892 read: IOPS=43.1k, BW=168MiB/s (177MB/s)(842MiB/5001msec) 00:20:27.892 slat (usec): min=2, max=135, avg= 4.66, stdev= 2.49 00:20:27.892 clat (usec): min=134, max=7175, avg=1303.27, stdev=297.30 00:20:27.892 lat (usec): min=154, max=7179, avg=1307.92, stdev=297.70 00:20:27.892 clat percentiles (usec): 00:20:27.892 | 1.00th=[ 816], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1139], 00:20:27.892 | 30.00th=[ 1188], 40.00th=[ 1221], 50.00th=[ 1270], 60.00th=[ 1303], 00:20:27.892 | 70.00th=[ 1352], 80.00th=[ 1418], 90.00th=[ 1532], 95.00th=[ 1680], 00:20:27.892 | 99.00th=[ 2278], 99.50th=[ 3195], 99.90th=[ 4424], 99.95th=[ 4817], 00:20:27.892 | 99.99th=[ 5932] 00:20:27.892 bw ( KiB/s): min=165888, max=182784, per=100.00%, avg=172616.89, stdev=6433.91, 
samples=9 00:20:27.892 iops : min=41472, max=45696, avg=43154.22, stdev=1608.48, samples=9 00:20:27.892 lat (usec) : 250=0.02%, 500=0.14%, 750=0.48%, 1000=2.73% 00:20:27.892 lat (msec) : 2=95.04%, 4=1.37%, 10=0.21% 00:20:27.892 cpu : usr=47.32%, sys=47.62%, ctx=27, majf=0, minf=762 00:20:27.892 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=24.7%, 32=51.5%, >=64=1.7% 00:20:27.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.892 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:27.892 issued rwts: total=215571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:27.892 00:20:27.892 Run status group 0 (all jobs): 00:20:27.892 READ: bw=168MiB/s (177MB/s), 168MiB/s-168MiB/s (177MB/s-177MB/s), io=842MiB (883MB), run=5001-5001msec 00:20:28.555 ----------------------------------------------------- 00:20:28.555 Suppressions used: 00:20:28.555 count bytes template 00:20:28.555 1 11 /usr/src/fio/parse.c 00:20:28.555 1 8 libtcmalloc_minimal.so 00:20:28.555 1 904 libcrypto.so 00:20:28.555 ----------------------------------------------------- 00:20:28.555 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:28.555 13:15:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.555 { 00:20:28.555 "subsystems": [ 00:20:28.555 { 00:20:28.555 "subsystem": "bdev", 00:20:28.555 "config": [ 00:20:28.555 { 00:20:28.555 "params": { 00:20:28.555 "io_mechanism": "io_uring", 00:20:28.555 "conserve_cpu": true, 00:20:28.555 "filename": "/dev/nvme0n1", 00:20:28.555 "name": "xnvme_bdev" 00:20:28.555 }, 00:20:28.555 "method": "bdev_xnvme_create" 00:20:28.555 }, 00:20:28.555 { 00:20:28.555 "method": "bdev_wait_for_examine" 00:20:28.555 } 00:20:28.555 ] 00:20:28.555 } 00:20:28.555 ] 00:20:28.555 } 00:20:28.555 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:28.555 fio-3.35 00:20:28.555 Starting 1 thread 00:20:35.116 00:20:35.116 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72344: Fri Dec 6 13:15:21 2024 00:20:35.116 write: IOPS=41.3k, BW=161MiB/s (169MB/s)(807MiB/5001msec); 0 zone resets 00:20:35.116 slat (usec): min=2, max=109, avg= 5.20, stdev= 2.54 00:20:35.116 clat (usec): min=895, max=3013, avg=1344.49, stdev=197.52 00:20:35.116 lat (usec): min=899, max=3055, avg=1349.69, stdev=198.44 00:20:35.116 clat percentiles (usec): 00:20:35.116 | 1.00th=[ 1037], 5.00th=[ 1090], 10.00th=[ 1123], 20.00th=[ 1188], 00:20:35.116 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1303], 60.00th=[ 1352], 00:20:35.116 | 70.00th=[ 1401], 80.00th=[ 1467], 90.00th=[ 1614], 95.00th=[ 1745], 00:20:35.116 | 99.00th=[ 1942], 99.50th=[ 2024], 99.90th=[ 2245], 99.95th=[ 2409], 00:20:35.116 | 99.99th=[ 2802] 00:20:35.116 bw ( KiB/s): min=161792, max=169472, per=99.94%, avg=165091.56, stdev=2203.85, samples=9 00:20:35.116 iops : min=40448, max=42368, avg=41272.89, stdev=550.96, samples=9 00:20:35.116 lat (usec) : 1000=0.20% 00:20:35.116 lat (msec) : 2=99.20%, 4=0.59% 00:20:35.116 cpu : usr=49.74%, sys=46.12%, ctx=13, majf=0, minf=763 00:20:35.116 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:35.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.116 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:35.116 issued rwts: total=0,206528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.116 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:35.116 00:20:35.116 Run status group 0 (all jobs): 00:20:35.116 WRITE: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=807MiB (846MB), run=5001-5001msec 00:20:36.049 ----------------------------------------------------- 00:20:36.049 Suppressions used: 00:20:36.049 count bytes template 00:20:36.049 1 11 /usr/src/fio/parse.c 00:20:36.049 1 8 libtcmalloc_minimal.so 00:20:36.049 1 904 libcrypto.so 00:20:36.049 ----------------------------------------------------- 00:20:36.049 00:20:36.049 00:20:36.049 real 0m14.948s 00:20:36.049 user 0m8.692s 00:20:36.049 sys 0m5.524s 00:20:36.049 13:15:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:20:36.049 13:15:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:36.049 ************************************ 00:20:36.049 END TEST xnvme_fio_plugin 00:20:36.049 ************************************ 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:36.049 13:15:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:36.049 13:15:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.049 13:15:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.049 13:15:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:36.049 ************************************ 00:20:36.049 START TEST xnvme_rpc 00:20:36.049 ************************************ 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:36.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72430 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72430 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72430 ']' 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.049 13:15:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:36.049 [2024-12-06 13:15:22.912503] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
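From this point the suite swaps the io_mechanism from io_uring to io_uring_cmd, which drives the NVMe generic character node (/dev/ng0n1) via io_uring passthrough rather than the block device (/dev/nvme0n1); conserve_cpu starts at false for this round. The RPC surface is unchanged — only the filename and mechanism arguments differ. In the trace the harness passes a literal empty string where -c would go; the sketch below simply omits it:

    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    # expected output: io_uring_cmd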
00:20:36.049 [2024-12-06 13:15:22.912991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72430 ] 00:20:36.306 [2024-12-06 13:15:23.083417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.306 [2024-12-06 13:15:23.207581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.240 xnvme_bdev 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.240 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:37.241 
13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.241 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72430 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72430 ']' 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72430 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72430 00:20:37.501 killing process with pid 72430 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72430' 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72430 00:20:37.501 13:15:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72430 00:20:40.031 ************************************ 00:20:40.031 END TEST xnvme_rpc 00:20:40.031 ************************************ 00:20:40.031 00:20:40.031 real 0m3.657s 00:20:40.031 user 0m3.816s 00:20:40.031 sys 0m0.566s 00:20:40.031 13:15:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.031 13:15:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:40.031 13:15:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:40.031 13:15:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.031 13:15:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.031 13:15:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:40.031 ************************************ 00:20:40.031 START TEST xnvme_bdevperf 00:20:40.031 ************************************ 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:40.031 13:15:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:40.031 { 00:20:40.031 "subsystems": [ 00:20:40.031 { 00:20:40.031 "subsystem": "bdev", 00:20:40.031 "config": [ 00:20:40.031 { 00:20:40.031 "params": { 00:20:40.031 "io_mechanism": "io_uring_cmd", 00:20:40.031 "conserve_cpu": false, 00:20:40.031 "filename": "/dev/ng0n1", 00:20:40.031 "name": "xnvme_bdev" 00:20:40.031 }, 00:20:40.031 "method": "bdev_xnvme_create" 00:20:40.031 }, 00:20:40.031 { 00:20:40.031 "method": "bdev_wait_for_examine" 00:20:40.031 } 00:20:40.031 ] 00:20:40.031 } 00:20:40.031 ] 00:20:40.031 } 00:20:40.031 [2024-12-06 13:15:26.644570] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:20:40.031 [2024-12-06 13:15:26.644755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72510 ] 00:20:40.031 [2024-12-06 13:15:26.831098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.031 [2024-12-06 13:15:26.964956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.595 Running I/O for 5 seconds... 00:20:42.529 49600.00 IOPS, 193.75 MiB/s [2024-12-06T13:15:30.478Z] 49056.00 IOPS, 191.62 MiB/s [2024-12-06T13:15:31.410Z] 48768.00 IOPS, 190.50 MiB/s [2024-12-06T13:15:32.797Z] 48336.00 IOPS, 188.81 MiB/s 00:20:45.781 Latency(us) 00:20:45.781 [2024-12-06T13:15:32.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.781 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:45.781 xnvme_bdev : 5.00 48793.85 190.60 0.00 0.00 1307.22 841.54 3768.32 00:20:45.781 [2024-12-06T13:15:32.797Z] =================================================================================================================== 00:20:45.781 [2024-12-06T13:15:32.797Z] Total : 48793.85 190.60 0.00 0.00 1307.22 841.54 3768.32 00:20:46.759 13:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:46.759 13:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:46.759 13:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:46.759 13:15:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:46.759 13:15:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:46.759 { 00:20:46.759 "subsystems": [ 00:20:46.759 { 00:20:46.759 "subsystem": "bdev", 00:20:46.759 "config": [ 00:20:46.759 { 00:20:46.759 "params": { 00:20:46.759 "io_mechanism": "io_uring_cmd", 00:20:46.759 "conserve_cpu": false, 00:20:46.759 "filename": "/dev/ng0n1", 00:20:46.759 "name": "xnvme_bdev" 00:20:46.759 }, 00:20:46.759 "method": "bdev_xnvme_create" 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "method": "bdev_wait_for_examine" 00:20:46.759 } 00:20:46.759 ] 00:20:46.759 } 00:20:46.759 ] 00:20:46.759 } 00:20:46.759 [2024-12-06 13:15:33.518561] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
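A quick sanity check worth knowing when reading these tables: with the fixed 4 KiB IO size (-o 4096), bandwidth is just MiB/s = IOPS × 4096 / 2^20 = IOPS / 256. Applied to the randread total above:

    echo '48793.85 / 256' | bc -l
    # 190.6009765625  -> printed as 190.60 MiB/s in the Total row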
00:20:46.759 [2024-12-06 13:15:33.519016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72584 ] 00:20:46.759 [2024-12-06 13:15:33.695444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.018 [2024-12-06 13:15:33.824857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.276 Running I/O for 5 seconds... 00:20:49.583 47104.00 IOPS, 184.00 MiB/s [2024-12-06T13:15:37.251Z] 46112.00 IOPS, 180.12 MiB/s [2024-12-06T13:15:38.185Z] 45546.67 IOPS, 177.92 MiB/s [2024-12-06T13:15:39.556Z] 45616.00 IOPS, 178.19 MiB/s 00:20:52.540 Latency(us) 00:20:52.540 [2024-12-06T13:15:39.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.540 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:52.540 xnvme_bdev : 5.00 45291.18 176.92 0.00 0.00 1408.06 893.67 5600.35 00:20:52.540 [2024-12-06T13:15:39.556Z] =================================================================================================================== 00:20:52.540 [2024-12-06T13:15:39.556Z] Total : 45291.18 176.92 0.00 0.00 1408.06 893.67 5600.35 00:20:53.471 13:15:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:53.471 13:15:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:53.471 13:15:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:53.471 13:15:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:53.471 13:15:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.471 { 00:20:53.471 "subsystems": [ 00:20:53.471 { 00:20:53.471 "subsystem": "bdev", 00:20:53.471 "config": [ 00:20:53.471 { 00:20:53.471 "params": { 00:20:53.471 "io_mechanism": "io_uring_cmd", 00:20:53.471 "conserve_cpu": false, 00:20:53.471 "filename": "/dev/ng0n1", 00:20:53.471 "name": "xnvme_bdev" 00:20:53.471 }, 00:20:53.471 "method": "bdev_xnvme_create" 00:20:53.471 }, 00:20:53.471 { 00:20:53.471 "method": "bdev_wait_for_examine" 00:20:53.471 } 00:20:53.471 ] 00:20:53.471 } 00:20:53.471 ] 00:20:53.471 } 00:20:53.471 [2024-12-06 13:15:40.425112] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:20:53.471 [2024-12-06 13:15:40.425319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:20:53.729 [2024-12-06 13:15:40.612387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.987 [2024-12-06 13:15:40.773931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.245 Running I/O for 5 seconds... 
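For context on the sequence of runs here: xnvme_bdevperf issues one bdevperf invocation per IO pattern in the io_uring_cmd list — from the commands logged, that list is randread, randwrite, unmap, and write_zeroes. Structurally the loop reduces to the sketch below (gen_conf is the harness function that emits the JSON shown before each run; the harness pipes its output through fd 62, while process substitution is shown here for brevity):

    for io_pattern in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json <(gen_conf) -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done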
00:20:56.551 68928.00 IOPS, 269.25 MiB/s [2024-12-06T13:15:44.503Z] 64640.00 IOPS, 252.50 MiB/s [2024-12-06T13:15:45.436Z] 64170.67 IOPS, 250.67 MiB/s [2024-12-06T13:15:46.396Z] 65840.00 IOPS, 257.19 MiB/s 00:20:59.380 Latency(us) 00:20:59.380 [2024-12-06T13:15:46.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.380 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:59.380 xnvme_bdev : 5.00 66449.15 259.57 0.00 0.00 958.82 435.67 3470.43 00:20:59.380 [2024-12-06T13:15:46.396Z] =================================================================================================================== 00:20:59.380 [2024-12-06T13:15:46.396Z] Total : 66449.15 259.57 0.00 0.00 958.82 435.67 3470.43 00:21:00.315 13:15:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:00.315 13:15:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:00.315 13:15:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:00.315 13:15:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:00.315 13:15:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:00.573 { 00:21:00.573 "subsystems": [ 00:21:00.573 { 00:21:00.573 "subsystem": "bdev", 00:21:00.573 "config": [ 00:21:00.573 { 00:21:00.573 "params": { 00:21:00.573 "io_mechanism": "io_uring_cmd", 00:21:00.573 "conserve_cpu": false, 00:21:00.573 "filename": "/dev/ng0n1", 00:21:00.573 "name": "xnvme_bdev" 00:21:00.573 }, 00:21:00.573 "method": "bdev_xnvme_create" 00:21:00.573 }, 00:21:00.573 { 00:21:00.573 "method": "bdev_wait_for_examine" 00:21:00.573 } 00:21:00.573 ] 00:21:00.573 } 00:21:00.573 ] 00:21:00.573 } 00:21:00.573 [2024-12-06 13:15:47.400489] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:21:00.573 [2024-12-06 13:15:47.400672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72747 ] 00:21:00.831 [2024-12-06 13:15:47.594007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.831 [2024-12-06 13:15:47.748847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.396 Running I/O for 5 seconds... 
00:21:03.261 37952.00 IOPS, 148.25 MiB/s [2024-12-06T13:15:51.230Z] 41145.00 IOPS, 160.72 MiB/s [2024-12-06T13:15:52.161Z] 40855.33 IOPS, 159.59 MiB/s [2024-12-06T13:15:53.531Z] 42180.75 IOPS, 164.77 MiB/s 00:21:06.515 Latency(us) 00:21:06.515 [2024-12-06T13:15:53.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.515 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:06.515 xnvme_bdev : 5.00 42734.68 166.93 0.00 0.00 1492.99 74.47 50283.99 00:21:06.515 [2024-12-06T13:15:53.531Z] =================================================================================================================== 00:21:06.515 [2024-12-06T13:15:53.531Z] Total : 42734.68 166.93 0.00 0.00 1492.99 74.47 50283.99 00:21:07.448 00:21:07.448 real 0m27.855s 00:21:07.448 user 0m15.740s 00:21:07.448 sys 0m11.639s 00:21:07.448 13:15:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.448 13:15:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 ************************************ 00:21:07.448 END TEST xnvme_bdevperf 00:21:07.448 ************************************ 00:21:07.448 13:15:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:07.448 13:15:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.448 13:15:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.448 13:15:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 ************************************ 00:21:07.448 START TEST xnvme_fio_plugin 00:21:07.448 ************************************ 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.448 13:15:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.786 { 00:21:07.786 "subsystems": [ 00:21:07.786 { 00:21:07.786 "subsystem": "bdev", 00:21:07.786 "config": [ 00:21:07.786 { 00:21:07.786 "params": { 00:21:07.786 "io_mechanism": "io_uring_cmd", 00:21:07.786 "conserve_cpu": false, 00:21:07.786 "filename": "/dev/ng0n1", 00:21:07.786 "name": "xnvme_bdev" 00:21:07.786 }, 00:21:07.786 "method": "bdev_xnvme_create" 00:21:07.786 }, 00:21:07.786 { 00:21:07.787 "method": "bdev_wait_for_examine" 00:21:07.787 } 00:21:07.787 ] 00:21:07.787 } 00:21:07.787 ] 00:21:07.787 } 00:21:07.787 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:07.787 fio-3.35 00:21:07.787 Starting 1 thread 00:21:14.343 00:21:14.343 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72871: Fri Dec 6 13:16:00 2024 00:21:14.343 read: IOPS=43.0k, BW=168MiB/s (176MB/s)(840MiB/5001msec) 00:21:14.343 slat (usec): min=3, max=873, avg= 4.79, stdev= 4.17 00:21:14.343 clat (usec): min=457, max=3415, avg=1297.34, stdev=243.72 00:21:14.343 lat (usec): min=461, max=3432, avg=1302.14, stdev=244.70 00:21:14.343 clat percentiles (usec): 00:21:14.343 | 1.00th=[ 938], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106], 00:21:14.343 | 30.00th=[ 1156], 40.00th=[ 1205], 50.00th=[ 1254], 60.00th=[ 1303], 00:21:14.343 | 70.00th=[ 1352], 80.00th=[ 1450], 90.00th=[ 1614], 95.00th=[ 1762], 00:21:14.343 | 99.00th=[ 2180], 99.50th=[ 2343], 99.90th=[ 2638], 99.95th=[ 2737], 00:21:14.343 | 99.99th=[ 3261] 00:21:14.343 bw ( KiB/s): min=150528, max=176640, per=98.36%, avg=169216.00, stdev=8012.02, samples=9 00:21:14.343 iops : min=37632, max=44160, avg=42304.00, stdev=2003.01, samples=9 00:21:14.343 lat (usec) : 500=0.01%, 750=0.02%, 1000=4.10% 00:21:14.343 lat (msec) : 2=94.11%, 4=1.77% 00:21:14.343 cpu : usr=42.84%, sys=55.58%, ctx=56, majf=0, minf=762 00:21:14.343 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:14.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.343 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:14.343 issued rwts: 
total=215100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:14.343 00:21:14.343 Run status group 0 (all jobs): 00:21:14.343 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=840MiB (881MB), run=5001-5001msec 00:21:14.910 ----------------------------------------------------- 00:21:14.910 Suppressions used: 00:21:14.910 count bytes template 00:21:14.910 1 11 /usr/src/fio/parse.c 00:21:14.910 1 8 libtcmalloc_minimal.so 00:21:14.910 1 904 libcrypto.so 00:21:14.910 ----------------------------------------------------- 00:21:14.910 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:14.910 13:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:21:15.168 { 00:21:15.168 "subsystems": [ 00:21:15.168 { 00:21:15.168 "subsystem": "bdev", 00:21:15.168 "config": [ 00:21:15.168 { 00:21:15.168 "params": { 00:21:15.168 "io_mechanism": "io_uring_cmd", 00:21:15.168 "conserve_cpu": false, 00:21:15.168 "filename": "/dev/ng0n1", 00:21:15.168 "name": "xnvme_bdev" 00:21:15.168 }, 00:21:15.168 "method": "bdev_xnvme_create" 00:21:15.168 }, 00:21:15.168 { 00:21:15.168 "method": "bdev_wait_for_examine" 00:21:15.168 } 00:21:15.168 ] 00:21:15.168 } 00:21:15.168 ] 00:21:15.168 } 00:21:15.168 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:15.168 fio-3.35 00:21:15.168 Starting 1 thread 00:21:21.726 00:21:21.726 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72963: Fri Dec 6 13:16:08 2024 00:21:21.726 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(749MiB/5001msec); 0 zone resets 00:21:21.726 slat (usec): min=2, max=173, avg= 6.02, stdev= 4.34 00:21:21.726 clat (usec): min=158, max=33406, avg=1434.57, stdev=663.78 00:21:21.726 lat (usec): min=162, max=33410, avg=1440.59, stdev=665.10 00:21:21.726 clat percentiles (usec): 00:21:21.726 | 1.00th=[ 979], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1188], 00:21:21.726 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1352], 60.00th=[ 1401], 00:21:21.726 | 70.00th=[ 1483], 80.00th=[ 1598], 90.00th=[ 1778], 95.00th=[ 1942], 00:21:21.726 | 99.00th=[ 3097], 99.50th=[ 3458], 99.90th=[ 3982], 99.95th=[ 4752], 00:21:21.726 | 99.99th=[31851] 00:21:21.726 bw ( KiB/s): min=114064, max=167504, per=99.41%, avg=152426.00, stdev=16571.36, samples=9 00:21:21.726 iops : min=28516, max=41876, avg=38106.44, stdev=4142.87, samples=9 00:21:21.726 lat (usec) : 250=0.01%, 500=0.03%, 750=0.31%, 1000=0.98% 00:21:21.726 lat (msec) : 2=94.45%, 4=4.13%, 10=0.06%, 50=0.03% 00:21:21.726 cpu : usr=47.34%, sys=51.42%, ctx=10, majf=0, minf=763 00:21:21.726 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.2%, 16=24.5%, 32=50.9%, >=64=1.6% 00:21:21.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.726 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:21.726 issued rwts: total=0,191708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.726 00:21:21.726 Run status group 0 (all jobs): 00:21:21.726 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=749MiB (785MB), run=5001-5001msec 00:21:22.661 ----------------------------------------------------- 00:21:22.661 Suppressions used: 00:21:22.661 count bytes template 00:21:22.661 1 11 /usr/src/fio/parse.c 00:21:22.661 1 8 libtcmalloc_minimal.so 00:21:22.661 1 904 libcrypto.so 00:21:22.661 ----------------------------------------------------- 00:21:22.661 00:21:22.661 00:21:22.661 real 0m15.097s 00:21:22.661 user 0m8.499s 00:21:22.661 sys 0m6.167s 00:21:22.661 13:16:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.661 13:16:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:22.661 ************************************ 00:21:22.661 END TEST xnvme_fio_plugin 00:21:22.661 ************************************ 00:21:22.661 13:16:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:22.661 13:16:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:21:22.661 13:16:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
00:21:22.661 13:16:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:22.661 13:16:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:22.661 13:16:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.661 13:16:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:22.661 ************************************ 00:21:22.661 START TEST xnvme_rpc 00:21:22.661 ************************************ 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73054 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73054 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73054 ']' 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.661 13:16:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.919 [2024-12-06 13:16:09.706759] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:21:22.919 [2024-12-06 13:16:09.706952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73054 ] 00:21:22.919 [2024-12-06 13:16:09.902475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.177 [2024-12-06 13:16:10.082587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.114 xnvme_bdev 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:24.114 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:24.372 
13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73054 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73054 ']' 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73054 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73054 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.372 killing process with pid 73054 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73054' 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73054 00:21:24.372 13:16:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73054 00:21:26.902 00:21:26.902 real 0m3.989s 00:21:26.902 user 0m4.213s 00:21:26.902 sys 0m0.576s 00:21:26.902 13:16:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.902 ************************************ 00:21:26.902 END TEST xnvme_rpc 00:21:26.902 13:16:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:26.902 ************************************ 00:21:26.902 13:16:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:26.902 13:16:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:26.902 13:16:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.902 13:16:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:26.902 ************************************ 00:21:26.902 START TEST xnvme_bdevperf 00:21:26.902 ************************************ 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:26.902 13:16:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:26.902 { 00:21:26.902 "subsystems": [ 00:21:26.902 { 00:21:26.902 "subsystem": "bdev", 00:21:26.902 "config": [ 00:21:26.902 { 00:21:26.902 "params": { 00:21:26.902 "io_mechanism": "io_uring_cmd", 00:21:26.902 "conserve_cpu": true, 00:21:26.902 "filename": "/dev/ng0n1", 00:21:26.902 "name": "xnvme_bdev" 00:21:26.902 }, 00:21:26.902 "method": "bdev_xnvme_create" 00:21:26.902 }, 00:21:26.902 { 00:21:26.902 "method": "bdev_wait_for_examine" 00:21:26.902 } 00:21:26.902 ] 00:21:26.902 } 00:21:26.902 ] 00:21:26.902 } 00:21:26.902 [2024-12-06 13:16:13.694773] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:21:26.902 [2024-12-06 13:16:13.694934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73134 ] 00:21:26.902 [2024-12-06 13:16:13.878532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.160 [2024-12-06 13:16:14.033661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.418 Running I/O for 5 seconds... 00:21:29.723 47104.00 IOPS, 184.00 MiB/s [2024-12-06T13:16:17.673Z] 47456.00 IOPS, 185.38 MiB/s [2024-12-06T13:16:18.607Z] 47104.00 IOPS, 184.00 MiB/s [2024-12-06T13:16:19.540Z] 47296.00 IOPS, 184.75 MiB/s [2024-12-06T13:16:19.540Z] 47923.20 IOPS, 187.20 MiB/s 00:21:32.524 Latency(us) 00:21:32.524 [2024-12-06T13:16:19.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.524 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:32.525 xnvme_bdev : 5.00 47909.89 187.15 0.00 0.00 1331.62 785.69 5898.24 00:21:32.525 [2024-12-06T13:16:19.541Z] =================================================================================================================== 00:21:32.525 [2024-12-06T13:16:19.541Z] Total : 47909.89 187.15 0.00 0.00 1331.62 785.69 5898.24 00:21:33.515 13:16:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:33.515 13:16:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:33.515 13:16:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:33.515 13:16:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:33.515 13:16:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:33.515 { 00:21:33.515 "subsystems": [ 00:21:33.515 { 00:21:33.515 "subsystem": "bdev", 00:21:33.515 "config": [ 00:21:33.515 { 00:21:33.515 "params": { 00:21:33.515 "io_mechanism": "io_uring_cmd", 00:21:33.515 "conserve_cpu": true, 00:21:33.515 "filename": "/dev/ng0n1", 00:21:33.515 "name": "xnvme_bdev" 00:21:33.515 }, 00:21:33.515 "method": "bdev_xnvme_create" 00:21:33.515 }, 00:21:33.515 { 00:21:33.515 "method": "bdev_wait_for_examine" 00:21:33.515 } 00:21:33.515 ] 00:21:33.515 } 00:21:33.515 ] 00:21:33.515 } 00:21:33.772 [2024-12-06 13:16:20.555461] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:21:33.772 [2024-12-06 13:16:20.555634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 00:21:33.772 [2024-12-06 13:16:20.740754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.032 [2024-12-06 13:16:20.869481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.290 Running I/O for 5 seconds... 00:21:36.595 47648.00 IOPS, 186.12 MiB/s [2024-12-06T13:16:24.605Z] 45584.00 IOPS, 178.06 MiB/s [2024-12-06T13:16:25.539Z] 45237.33 IOPS, 176.71 MiB/s [2024-12-06T13:16:26.473Z] 45656.00 IOPS, 178.34 MiB/s 00:21:39.457 Latency(us) 00:21:39.457 [2024-12-06T13:16:26.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.457 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:39.457 xnvme_bdev : 5.00 46453.17 181.46 0.00 0.00 1372.88 815.48 4259.84 00:21:39.457 [2024-12-06T13:16:26.473Z] =================================================================================================================== 00:21:39.457 [2024-12-06T13:16:26.473Z] Total : 46453.17 181.46 0.00 0.00 1372.88 815.48 4259.84 00:21:40.391 13:16:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:40.391 13:16:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:21:40.391 13:16:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:40.391 13:16:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:40.391 13:16:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:40.391 { 00:21:40.391 "subsystems": [ 00:21:40.391 { 00:21:40.391 "subsystem": "bdev", 00:21:40.391 "config": [ 00:21:40.391 { 00:21:40.391 "params": { 00:21:40.391 "io_mechanism": "io_uring_cmd", 00:21:40.391 "conserve_cpu": true, 00:21:40.391 "filename": "/dev/ng0n1", 00:21:40.391 "name": "xnvme_bdev" 00:21:40.391 }, 00:21:40.391 "method": "bdev_xnvme_create" 00:21:40.391 }, 00:21:40.391 { 00:21:40.391 "method": "bdev_wait_for_examine" 00:21:40.391 } 00:21:40.391 ] 00:21:40.391 } 00:21:40.391 ] 00:21:40.391 } 00:21:40.649 [2024-12-06 13:16:27.425496] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:21:40.649 [2024-12-06 13:16:27.425685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73290 ] 00:21:40.649 [2024-12-06 13:16:27.605667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.906 [2024-12-06 13:16:27.740042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.165 Running I/O for 5 seconds... 
00:21:43.516 66752.00 IOPS, 260.75 MiB/s [2024-12-06T13:16:31.466Z] 70016.00 IOPS, 273.50 MiB/s [2024-12-06T13:16:32.401Z] 71552.00 IOPS, 279.50 MiB/s [2024-12-06T13:16:33.335Z] 70128.00 IOPS, 273.94 MiB/s [2024-12-06T13:16:33.335Z] 70860.80 IOPS, 276.80 MiB/s 00:21:46.319 Latency(us) 00:21:46.319 [2024-12-06T13:16:33.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.319 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:21:46.319 xnvme_bdev : 5.00 70834.73 276.70 0.00 0.00 899.52 472.90 3127.85 00:21:46.319 [2024-12-06T13:16:33.335Z] =================================================================================================================== 00:21:46.319 [2024-12-06T13:16:33.335Z] Total : 70834.73 276.70 0.00 0.00 899.52 472.90 3127.85 00:21:47.255 13:16:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:47.255 13:16:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:47.255 13:16:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:47.255 13:16:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:47.255 13:16:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:47.255 { 00:21:47.255 "subsystems": [ 00:21:47.255 { 00:21:47.255 "subsystem": "bdev", 00:21:47.255 "config": [ 00:21:47.255 { 00:21:47.255 "params": { 00:21:47.255 "io_mechanism": "io_uring_cmd", 00:21:47.255 "conserve_cpu": true, 00:21:47.255 "filename": "/dev/ng0n1", 00:21:47.255 "name": "xnvme_bdev" 00:21:47.255 }, 00:21:47.255 "method": "bdev_xnvme_create" 00:21:47.255 }, 00:21:47.255 { 00:21:47.255 "method": "bdev_wait_for_examine" 00:21:47.255 } 00:21:47.255 ] 00:21:47.255 } 00:21:47.255 ] 00:21:47.255 } 00:21:47.514 [2024-12-06 13:16:34.273426] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:21:47.514 [2024-12-06 13:16:34.273625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73366 ] 00:21:47.514 [2024-12-06 13:16:34.468511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.773 [2024-12-06 13:16:34.629037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.033 Running I/O for 5 seconds... 
00:21:50.337 31156.00 IOPS, 121.70 MiB/s [2024-12-06T13:16:38.286Z] 32995.00 IOPS, 128.89 MiB/s [2024-12-06T13:16:39.217Z] 35228.00 IOPS, 137.61 MiB/s [2024-12-06T13:16:40.149Z] 36080.50 IOPS, 140.94 MiB/s [2024-12-06T13:16:40.149Z] 36399.80 IOPS, 142.19 MiB/s 00:21:53.133 Latency(us) 00:21:53.133 [2024-12-06T13:16:40.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.133 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:53.133 xnvme_bdev : 5.01 36356.63 142.02 0.00 0.00 1752.14 136.84 16443.58 00:21:53.133 [2024-12-06T13:16:40.149Z] =================================================================================================================== 00:21:53.133 [2024-12-06T13:16:40.149Z] Total : 36356.63 142.02 0.00 0.00 1752.14 136.84 16443.58 00:21:54.168 00:21:54.168 real 0m27.551s 00:21:54.168 user 0m18.709s 00:21:54.168 sys 0m6.653s 00:21:54.168 13:16:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.168 ************************************ 00:21:54.168 13:16:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:54.168 END TEST xnvme_bdevperf 00:21:54.168 ************************************ 00:21:54.426 13:16:41 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:54.426 13:16:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:54.426 13:16:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.426 13:16:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:54.426 ************************************ 00:21:54.426 START TEST xnvme_fio_plugin 00:21:54.426 ************************************ 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:54.426 13:16:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:54.426 { 00:21:54.426 "subsystems": [ 00:21:54.426 { 00:21:54.426 "subsystem": "bdev", 00:21:54.426 "config": [ 00:21:54.426 { 00:21:54.426 "params": { 00:21:54.426 "io_mechanism": "io_uring_cmd", 00:21:54.426 "conserve_cpu": true, 00:21:54.426 "filename": "/dev/ng0n1", 00:21:54.426 "name": "xnvme_bdev" 00:21:54.426 }, 00:21:54.426 "method": "bdev_xnvme_create" 00:21:54.426 }, 00:21:54.426 { 00:21:54.426 "method": "bdev_wait_for_examine" 00:21:54.426 } 00:21:54.426 ] 00:21:54.426 } 00:21:54.426 ] 00:21:54.426 } 00:21:54.685 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:54.685 fio-3.35 00:21:54.685 Starting 1 thread 00:22:01.294 00:22:01.294 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73490: Fri Dec 6 13:16:47 2024 00:22:01.294 read: IOPS=46.3k, BW=181MiB/s (190MB/s)(904MiB/5001msec) 00:22:01.294 slat (usec): min=2, max=480, avg= 4.55, stdev= 3.31 00:22:01.294 clat (usec): min=467, max=5898, avg=1203.14, stdev=276.21 00:22:01.294 lat (usec): min=473, max=5927, avg=1207.69, stdev=277.67 00:22:01.294 clat percentiles (usec): 00:22:01.294 | 1.00th=[ 873], 5.00th=[ 947], 10.00th=[ 979], 20.00th=[ 1029], 00:22:01.294 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1156], 60.00th=[ 1188], 00:22:01.294 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1450], 95.00th=[ 1614], 00:22:01.294 | 99.00th=[ 2212], 99.50th=[ 2769], 99.90th=[ 3982], 99.95th=[ 4555], 00:22:01.294 | 99.99th=[ 5538] 00:22:01.294 bw ( KiB/s): min=168952, max=199936, per=100.00%, avg=185786.89, stdev=10635.23, samples=9 00:22:01.294 iops : min=42238, max=49984, avg=46446.67, stdev=2658.85, samples=9 00:22:01.294 lat (usec) : 500=0.01%, 750=0.02%, 1000=13.13% 00:22:01.294 lat (msec) : 2=85.22%, 4=1.54%, 10=0.10% 00:22:01.294 cpu : usr=70.06%, sys=26.48%, ctx=30, majf=0, minf=762 00:22:01.294 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:01.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.294 complete : 0=0.0%, 4=98.5%, 8=0.0%, 
16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:01.294 issued rwts: total=231455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.294 00:22:01.294 Run status group 0 (all jobs): 00:22:01.294 READ: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=904MiB (948MB), run=5001-5001msec 00:22:01.866 ----------------------------------------------------- 00:22:01.866 Suppressions used: 00:22:01.866 count bytes template 00:22:01.866 1 11 /usr/src/fio/parse.c 00:22:01.866 1 8 libtcmalloc_minimal.so 00:22:01.866 1 904 libcrypto.so 00:22:01.866 ----------------------------------------------------- 00:22:01.866 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:01.866 13:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:01.866 { 00:22:01.866 "subsystems": [ 00:22:01.866 { 00:22:01.866 "subsystem": "bdev", 00:22:01.866 "config": [ 00:22:01.866 { 00:22:01.866 "params": { 00:22:01.867 "io_mechanism": "io_uring_cmd", 00:22:01.867 "conserve_cpu": true, 00:22:01.867 "filename": "/dev/ng0n1", 00:22:01.867 "name": "xnvme_bdev" 00:22:01.867 }, 00:22:01.867 "method": "bdev_xnvme_create" 00:22:01.867 }, 00:22:01.867 { 00:22:01.867 "method": "bdev_wait_for_examine" 00:22:01.867 } 00:22:01.867 ] 00:22:01.867 } 00:22:01.867 ] 00:22:01.867 } 00:22:02.125 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:02.125 fio-3.35 00:22:02.125 Starting 1 thread 00:22:08.682 00:22:08.682 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73581: Fri Dec 6 13:16:54 2024 00:22:08.682 write: IOPS=40.6k, BW=159MiB/s (166MB/s)(794MiB/5001msec); 0 zone resets 00:22:08.682 slat (usec): min=2, max=101, avg= 5.50, stdev= 3.20 00:22:08.682 clat (usec): min=401, max=4544, avg=1358.27, stdev=235.48 00:22:08.682 lat (usec): min=408, max=4553, avg=1363.77, stdev=236.61 00:22:08.682 clat percentiles (usec): 00:22:08.682 | 1.00th=[ 1004], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1172], 00:22:08.682 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1319], 60.00th=[ 1369], 00:22:08.682 | 70.00th=[ 1434], 80.00th=[ 1532], 90.00th=[ 1663], 95.00th=[ 1778], 00:22:08.682 | 99.00th=[ 2089], 99.50th=[ 2245], 99.90th=[ 2638], 99.95th=[ 2868], 00:22:08.682 | 99.99th=[ 4424] 00:22:08.682 bw ( KiB/s): min=142848, max=176640, per=100.00%, avg=163271.11, stdev=10093.17, samples=9 00:22:08.682 iops : min=35712, max=44160, avg=40817.78, stdev=2523.29, samples=9 00:22:08.682 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.00% 00:22:08.682 lat (msec) : 2=97.55%, 4=1.41%, 10=0.03% 00:22:08.682 cpu : usr=68.76%, sys=27.62%, ctx=10, majf=0, minf=763 00:22:08.682 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:08.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.682 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:08.682 issued rwts: total=0,203156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:08.682 00:22:08.682 Run status group 0 (all jobs): 00:22:08.682 WRITE: bw=159MiB/s (166MB/s), 159MiB/s-159MiB/s (166MB/s-166MB/s), io=794MiB (832MB), run=5001-5001msec 00:22:09.246 ----------------------------------------------------- 00:22:09.246 Suppressions used: 00:22:09.246 count bytes template 00:22:09.246 1 11 /usr/src/fio/parse.c 00:22:09.246 1 8 libtcmalloc_minimal.so 00:22:09.246 1 904 libcrypto.so 00:22:09.246 ----------------------------------------------------- 00:22:09.246 00:22:09.246 00:22:09.246 real 0m14.946s 00:22:09.246 user 0m10.807s 00:22:09.246 sys 0m3.492s 00:22:09.246 13:16:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.246 13:16:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:09.246 ************************************ 00:22:09.246 END TEST xnvme_fio_plugin 00:22:09.246 ************************************ 00:22:09.246 13:16:56 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73054 00:22:09.246 13:16:56 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73054 ']' 00:22:09.246 13:16:56 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73054 
00:22:09.246 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73054) - No such process 00:22:09.246 Process with pid 73054 is not found 00:22:09.246 13:16:56 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73054 is not found' 00:22:09.246 00:22:09.246 real 3m50.696s 00:22:09.246 user 2m12.727s 00:22:09.246 sys 1m21.863s 00:22:09.246 13:16:56 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.246 13:16:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:09.246 ************************************ 00:22:09.246 END TEST nvme_xnvme 00:22:09.246 ************************************ 00:22:09.246 13:16:56 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:09.246 13:16:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:09.246 13:16:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.246 13:16:56 -- common/autotest_common.sh@10 -- # set +x 00:22:09.246 ************************************ 00:22:09.246 START TEST blockdev_xnvme 00:22:09.246 ************************************ 00:22:09.246 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:09.504 * Looking for test storage... 00:22:09.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.504 13:16:56 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.504 --rc genhtml_branch_coverage=1 00:22:09.504 --rc genhtml_function_coverage=1 00:22:09.504 --rc genhtml_legend=1 00:22:09.504 --rc geninfo_all_blocks=1 00:22:09.504 --rc geninfo_unexecuted_blocks=1 00:22:09.504 00:22:09.504 ' 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.504 --rc genhtml_branch_coverage=1 00:22:09.504 --rc genhtml_function_coverage=1 00:22:09.504 --rc genhtml_legend=1 00:22:09.504 --rc geninfo_all_blocks=1 00:22:09.504 --rc geninfo_unexecuted_blocks=1 00:22:09.504 00:22:09.504 ' 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.504 --rc genhtml_branch_coverage=1 00:22:09.504 --rc genhtml_function_coverage=1 00:22:09.504 --rc genhtml_legend=1 00:22:09.504 --rc geninfo_all_blocks=1 00:22:09.504 --rc geninfo_unexecuted_blocks=1 00:22:09.504 00:22:09.504 ' 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.504 --rc genhtml_branch_coverage=1 00:22:09.504 --rc genhtml_function_coverage=1 00:22:09.504 --rc genhtml_legend=1 00:22:09.504 --rc geninfo_all_blocks=1 00:22:09.504 --rc geninfo_unexecuted_blocks=1 00:22:09.504 00:22:09.504 ' 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73723 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:09.504 13:16:56 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73723 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73723 ']' 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.504 13:16:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:09.761 [2024-12-06 13:16:56.560639] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:22:09.761 [2024-12-06 13:16:56.560808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73723 ] 00:22:09.761 [2024-12-06 13:16:56.746769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.019 [2024-12-06 13:16:56.905195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.034 13:16:57 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.034 13:16:57 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:22:11.034 13:16:57 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:22:11.034 13:16:57 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:22:11.034 13:16:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:22:11.034 13:16:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:22:11.034 13:16:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:11.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:11.859 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:11.859 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:12.118 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:22:12.118 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0c0n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:12.118 
13:16:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n2 ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n3 ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring -c' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:22:12.118 nvme0n1 00:22:12.118 nvme1n1 00:22:12.118 nvme1n2 00:22:12.118 nvme1n3 00:22:12.118 nvme2n1 00:22:12.118 nvme3n1 00:22:12.118 13:16:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.118 13:16:58 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.118 
13:16:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.118 13:16:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:22:12.118 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:22:12.377 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "73f85ef4-59da-4a2f-ba68-79fbe8423e81"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "73f85ef4-59da-4a2f-ba68-79fbe8423e81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1db03f47-d645-4837-8216-f0c63849989e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1db03f47-d645-4837-8216-f0c63849989e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "a4fa627d-ff18-46b3-9b0b-c02d6a696952"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4fa627d-ff18-46b3-9b0b-c02d6a696952",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "ce5fc5be-1b9b-4624-a57e-0d9ea9cf22bf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce5fc5be-1b9b-4624-a57e-0d9ea9cf22bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "304912cf-abc3-43ff-a220-e64b716415df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "304912cf-abc3-43ff-a220-e64b716415df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "95909caa-b42e-4dd1-a8dd-c3c37dedb3aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "95909caa-b42e-4dd1-a8dd-c3c37dedb3aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:12.377 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:22:12.377 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:22:12.377 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:22:12.377 13:16:59 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73723 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73723 ']' 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73723 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73723 00:22:12.377 killing process with pid 73723 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73723' 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73723 00:22:12.377 13:16:59 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73723 00:22:14.903 13:17:01 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:14.903 13:17:01 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:14.903 13:17:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:14.903 13:17:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.903 13:17:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:14.903 ************************************ 00:22:14.903 START TEST bdev_hello_world 00:22:14.903 ************************************ 00:22:14.903 13:17:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:14.903 [2024-12-06 13:17:01.604619] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:22:14.903 [2024-12-06 13:17:01.604784] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74014 ] 00:22:14.903 [2024-12-06 13:17:01.784208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.161 [2024-12-06 13:17:01.919579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.419 [2024-12-06 13:17:02.370319] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:15.419 [2024-12-06 13:17:02.370395] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:22:15.419 [2024-12-06 13:17:02.370429] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:15.419 [2024-12-06 13:17:02.373027] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:15.419 [2024-12-06 13:17:02.373556] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:15.419 [2024-12-06 13:17:02.373601] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:15.419 [2024-12-06 13:17:02.373853] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
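The hello-world stage reduces to a single run of the prebuilt example against the bdev.json generated earlier; a sketch of that invocation, with the paths as they appear in the log:

build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1

Per the NOTICE lines above, the example opens nvme0n1, acquires an I/O channel, writes "Hello World!", reads the string back, and stops the app.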
00:22:15.419 00:22:15.419 [2024-12-06 13:17:02.373895] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:16.793 00:22:16.793 real 0m1.924s 00:22:16.793 user 0m1.528s 00:22:16.793 sys 0m0.276s 00:22:16.793 13:17:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.793 13:17:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:16.794 ************************************ 00:22:16.794 END TEST bdev_hello_world 00:22:16.794 ************************************ 00:22:16.794 13:17:03 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:22:16.794 13:17:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.794 13:17:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.794 13:17:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:16.794 ************************************ 00:22:16.794 START TEST bdev_bounds 00:22:16.794 ************************************ 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:16.794 Process bdevio pid: 74056 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74056 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74056' 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74056 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74056 ']' 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.794 13:17:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:16.794 [2024-12-06 13:17:03.575148] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:22:16.794 [2024-12-06 13:17:03.575304] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74056 ] 00:22:16.794 [2024-12-06 13:17:03.762924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.052 [2024-12-06 13:17:03.917695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.052 [2024-12-06 13:17:03.917819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.052 [2024-12-06 13:17:03.917820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.618 13:17:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.618 13:17:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:17.618 13:17:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:17.876 I/O targets: 00:22:17.876 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:22:17.876 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:17.876 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:17.876 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:17.876 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:22:17.876 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:22:17.876 00:22:17.876 00:22:17.876 CUnit - A unit testing framework for C - Version 2.1-3 00:22:17.876 http://cunit.sourceforge.net/ 00:22:17.876 00:22:17.876 00:22:17.876 Suite: bdevio tests on: nvme3n1 00:22:17.876 Test: blockdev write read block ...passed 00:22:17.876 Test: blockdev write zeroes read block ...passed 00:22:17.876 Test: blockdev write zeroes read no split ...passed 00:22:17.876 Test: blockdev write zeroes read split ...passed 00:22:17.876 Test: blockdev write zeroes read split partial ...passed 00:22:17.876 Test: blockdev reset ...passed 00:22:17.876 Test: blockdev write read 8 blocks ...passed 00:22:17.876 Test: blockdev write read size > 128k ...passed 00:22:17.876 Test: blockdev write read invalid size ...passed 00:22:17.876 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:17.876 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:17.876 Test: blockdev write read max offset ...passed 00:22:17.876 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:17.876 Test: blockdev writev readv 8 blocks ...passed 00:22:17.876 Test: blockdev writev readv 30 x 1block ...passed 00:22:17.876 Test: blockdev writev readv block ...passed 00:22:17.876 Test: blockdev writev readv size > 128k ...passed 00:22:17.876 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:17.876 Test: blockdev comparev and writev ...passed 00:22:17.876 Test: blockdev nvme passthru rw ...passed 00:22:17.876 Test: blockdev nvme passthru vendor specific ...passed 00:22:17.876 Test: blockdev nvme admin passthru ...passed 00:22:17.876 Test: blockdev copy ...passed 00:22:17.876 Suite: bdevio tests on: nvme2n1 00:22:17.876 Test: blockdev write read block ...passed 00:22:17.876 Test: blockdev write zeroes read block ...passed 00:22:17.876 Test: blockdev write zeroes read no split ...passed 00:22:17.876 Test: blockdev write zeroes read split ...passed 00:22:17.876 Test: blockdev write zeroes read split partial ...passed 00:22:17.876 Test: blockdev reset ...passed 
00:22:17.877 Test: blockdev write read 8 blocks ...passed 00:22:17.877 Test: blockdev write read size > 128k ...passed 00:22:17.877 Test: blockdev write read invalid size ...passed 00:22:18.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.135 Test: blockdev write read max offset ...passed 00:22:18.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.135 Test: blockdev writev readv 8 blocks ...passed 00:22:18.135 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.135 Test: blockdev writev readv block ...passed 00:22:18.135 Test: blockdev writev readv size > 128k ...passed 00:22:18.135 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.135 Test: blockdev comparev and writev ...passed 00:22:18.135 Test: blockdev nvme passthru rw ...passed 00:22:18.135 Test: blockdev nvme passthru vendor specific ...passed 00:22:18.135 Test: blockdev nvme admin passthru ...passed 00:22:18.135 Test: blockdev copy ...passed 00:22:18.135 Suite: bdevio tests on: nvme1n3 00:22:18.135 Test: blockdev write read block ...passed 00:22:18.135 Test: blockdev write zeroes read block ...passed 00:22:18.135 Test: blockdev write zeroes read no split ...passed 00:22:18.135 Test: blockdev write zeroes read split ...passed 00:22:18.135 Test: blockdev write zeroes read split partial ...passed 00:22:18.135 Test: blockdev reset ...passed 00:22:18.135 Test: blockdev write read 8 blocks ...passed 00:22:18.135 Test: blockdev write read size > 128k ...passed 00:22:18.135 Test: blockdev write read invalid size ...passed 00:22:18.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.135 Test: blockdev write read max offset ...passed 00:22:18.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.135 Test: blockdev writev readv 8 blocks ...passed 00:22:18.135 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.135 Test: blockdev writev readv block ...passed 00:22:18.135 Test: blockdev writev readv size > 128k ...passed 00:22:18.135 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.135 Test: blockdev comparev and writev ...passed 00:22:18.135 Test: blockdev nvme passthru rw ...passed 00:22:18.135 Test: blockdev nvme passthru vendor specific ...passed 00:22:18.135 Test: blockdev nvme admin passthru ...passed 00:22:18.135 Test: blockdev copy ...passed 00:22:18.135 Suite: bdevio tests on: nvme1n2 00:22:18.135 Test: blockdev write read block ...passed 00:22:18.135 Test: blockdev write zeroes read block ...passed 00:22:18.135 Test: blockdev write zeroes read no split ...passed 00:22:18.135 Test: blockdev write zeroes read split ...passed 00:22:18.135 Test: blockdev write zeroes read split partial ...passed 00:22:18.135 Test: blockdev reset ...passed 00:22:18.135 Test: blockdev write read 8 blocks ...passed 00:22:18.135 Test: blockdev write read size > 128k ...passed 00:22:18.135 Test: blockdev write read invalid size ...passed 00:22:18.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.135 Test: blockdev write read max offset ...passed 00:22:18.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.135 Test: blockdev writev readv 8 blocks 
...passed 00:22:18.135 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.135 Test: blockdev writev readv block ...passed 00:22:18.135 Test: blockdev writev readv size > 128k ...passed 00:22:18.135 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.135 Test: blockdev comparev and writev ...passed 00:22:18.135 Test: blockdev nvme passthru rw ...passed 00:22:18.135 Test: blockdev nvme passthru vendor specific ...passed 00:22:18.135 Test: blockdev nvme admin passthru ...passed 00:22:18.135 Test: blockdev copy ...passed 00:22:18.135 Suite: bdevio tests on: nvme1n1 00:22:18.135 Test: blockdev write read block ...passed 00:22:18.135 Test: blockdev write zeroes read block ...passed 00:22:18.135 Test: blockdev write zeroes read no split ...passed 00:22:18.135 Test: blockdev write zeroes read split ...passed 00:22:18.392 Test: blockdev write zeroes read split partial ...passed 00:22:18.392 Test: blockdev reset ...passed 00:22:18.392 Test: blockdev write read 8 blocks ...passed 00:22:18.392 Test: blockdev write read size > 128k ...passed 00:22:18.392 Test: blockdev write read invalid size ...passed 00:22:18.392 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.392 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.392 Test: blockdev write read max offset ...passed 00:22:18.392 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.392 Test: blockdev writev readv 8 blocks ...passed 00:22:18.392 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.392 Test: blockdev writev readv block ...passed 00:22:18.392 Test: blockdev writev readv size > 128k ...passed 00:22:18.392 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.392 Test: blockdev comparev and writev ...passed 00:22:18.392 Test: blockdev nvme passthru rw ...passed 00:22:18.392 Test: blockdev nvme passthru vendor specific ...passed 00:22:18.392 Test: blockdev nvme admin passthru ...passed 00:22:18.392 Test: blockdev copy ...passed 00:22:18.392 Suite: bdevio tests on: nvme0n1 00:22:18.392 Test: blockdev write read block ...passed 00:22:18.392 Test: blockdev write zeroes read block ...passed 00:22:18.392 Test: blockdev write zeroes read no split ...passed 00:22:18.392 Test: blockdev write zeroes read split ...passed 00:22:18.392 Test: blockdev write zeroes read split partial ...passed 00:22:18.392 Test: blockdev reset ...passed 00:22:18.392 Test: blockdev write read 8 blocks ...passed 00:22:18.392 Test: blockdev write read size > 128k ...passed 00:22:18.392 Test: blockdev write read invalid size ...passed 00:22:18.392 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.392 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.392 Test: blockdev write read max offset ...passed 00:22:18.392 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.392 Test: blockdev writev readv 8 blocks ...passed 00:22:18.392 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.392 Test: blockdev writev readv block ...passed 00:22:18.392 Test: blockdev writev readv size > 128k ...passed 00:22:18.392 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.392 Test: blockdev comparev and writev ...passed 00:22:18.392 Test: blockdev nvme passthru rw ...passed 00:22:18.392 Test: blockdev nvme passthru vendor specific ...passed 00:22:18.392 Test: blockdev nvme admin passthru ...passed 00:22:18.392 Test: blockdev copy ...passed 
00:22:18.392 00:22:18.392 Run Summary: Type Total Ran Passed Failed Inactive 00:22:18.392 suites 6 6 n/a 0 0 00:22:18.392 tests 138 138 138 0 0 00:22:18.392 asserts 780 780 780 0 n/a 00:22:18.392 00:22:18.392 Elapsed time = 1.512 seconds 00:22:18.392 0 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74056 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74056 ']' 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74056 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74056 00:22:18.392 killing process with pid 74056 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74056' 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74056 00:22:18.392 13:17:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74056 00:22:19.764 13:17:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:19.764 00:22:19.764 real 0m2.964s 00:22:19.764 user 0m7.420s 00:22:19.764 sys 0m0.454s 00:22:19.764 13:17:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.764 13:17:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:19.764 ************************************ 00:22:19.764 END TEST bdev_bounds 00:22:19.764 ************************************ 00:22:19.764 13:17:06 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:22:19.764 13:17:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:19.764 13:17:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.764 13:17:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:19.764 ************************************ 00:22:19.764 START TEST bdev_nbd 00:22:19.764 ************************************ 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
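The bounds test pairs the bdevio server with a small driver script: the server is started with -w so it waits for an RPC before running, and tests.py triggers the CUnit suites whose Run Summary appears above. A sketch of the two commands, assuming the same bdev.json:

test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &    # -w: wait for perform_tests over the RPC socket
test/bdev/bdevio/tests.py perform_tests                         # runs the 6 suites / 138 tests, then prints the summary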
00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74111 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:19.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74111 /var/tmp/spdk-nbd.sock 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74111 ']' 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.764 13:17:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:19.764 [2024-12-06 13:17:06.605691] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
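The bdev_nbd stage that follows exports each bdev as a kernel block device and sanity-checks it with a one-block direct read before tearing it down. A sketch of one iteration, assuming bdev_svc is listening on /var/tmp/spdk-nbd.sock as started here; the scratch output path is illustrative (the harness writes to test/bdev/nbdtest):

scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # expect "1+0 records in / 1+0 records out"
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks          # prints [] once all exports are stopped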
00:22:19.764 [2024-12-06 13:17:06.605989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.023 [2024-12-06 13:17:06.806929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.023 [2024-12-06 13:17:06.982738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:20.956 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:20.957 13:17:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:22:21.214 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:21.215 
1+0 records in 00:22:21.215 1+0 records out 00:22:21.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566842 s, 7.2 MB/s 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:21.215 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:21.473 1+0 records in 00:22:21.473 1+0 records out 00:22:21.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574065 s, 7.1 MB/s 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:21.473 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:22:21.732 13:17:08 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:21.732 1+0 records in 00:22:21.732 1+0 records out 00:22:21.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535578 s, 7.6 MB/s 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:21.732 13:17:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:22.297 1+0 records in 00:22:22.297 1+0 records out 00:22:22.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00410432 s, 998 kB/s 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:22.297 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:22.556 1+0 records in 00:22:22.556 1+0 records out 00:22:22.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000754441 s, 5.4 MB/s 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:22.556 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:22:22.815 13:17:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:22.815 1+0 records in 00:22:22.815 1+0 records out 00:22:22.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752999 s, 5.4 MB/s 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:22.815 13:17:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:23.073 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd0", 00:22:23.073 "bdev_name": "nvme0n1" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd1", 00:22:23.073 "bdev_name": "nvme1n1" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd2", 00:22:23.073 "bdev_name": "nvme1n2" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd3", 00:22:23.073 "bdev_name": "nvme1n3" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd4", 00:22:23.073 "bdev_name": "nvme2n1" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd5", 00:22:23.073 "bdev_name": "nvme3n1" 00:22:23.073 } 00:22:23.073 ]' 00:22:23.073 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:23.073 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:23.073 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd0", 00:22:23.073 "bdev_name": "nvme0n1" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd1", 00:22:23.073 "bdev_name": "nvme1n1" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd2", 00:22:23.073 "bdev_name": "nvme1n2" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd3", 00:22:23.073 "bdev_name": "nvme1n3" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": "/dev/nbd4", 00:22:23.073 "bdev_name": "nvme2n1" 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "nbd_device": 
"/dev/nbd5", 00:22:23.073 "bdev_name": "nvme3n1" 00:22:23.073 } 00:22:23.073 ]' 00:22:23.331 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:22:23.331 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:23.331 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:22:23.331 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:23.331 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:23.331 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:23.331 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:23.588 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:23.846 13:17:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.413 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.980 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.981 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:22:24.981 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:22:24.981 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:22:24.981 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:22:24.981 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.981 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.981 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:22:25.239 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:25.239 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.239 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:25.239 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:25.239 13:17:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:25.498 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:22:25.756 /dev/nbd0 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:25.756 1+0 records in 00:22:25.756 1+0 records out 00:22:25.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520597 s, 7.9 MB/s 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:25.756 13:17:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:22:26.324 /dev/nbd1 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:26.324 1+0 records in 00:22:26.324 1+0 records out 00:22:26.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544007 s, 7.5 MB/s 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:26.324 13:17:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:26.324 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:22:26.616 /dev/nbd10 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:26.616 1+0 records in 00:22:26.616 1+0 records out 00:22:26.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590528 s, 6.9 MB/s 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:26.616 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:22:26.873 /dev/nbd11 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:26.873 13:17:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:26.873 1+0 records in 00:22:26.873 1+0 records out 00:22:26.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662831 s, 6.2 MB/s 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:26.873 13:17:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:22:27.131 /dev/nbd12 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:27.131 1+0 records in 00:22:27.131 1+0 records out 00:22:27.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651262 s, 6.3 MB/s 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:27.131 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:22:27.697 /dev/nbd13 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:27.697 1+0 records in 00:22:27.697 1+0 records out 00:22:27.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786801 s, 5.2 MB/s 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:27.697 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd0", 00:22:27.994 "bdev_name": "nvme0n1" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd1", 00:22:27.994 "bdev_name": "nvme1n1" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd10", 00:22:27.994 "bdev_name": "nvme1n2" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd11", 00:22:27.994 "bdev_name": "nvme1n3" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd12", 00:22:27.994 "bdev_name": "nvme2n1" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd13", 00:22:27.994 "bdev_name": "nvme3n1" 00:22:27.994 } 00:22:27.994 ]' 00:22:27.994 13:17:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd0", 00:22:27.994 "bdev_name": "nvme0n1" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd1", 00:22:27.994 "bdev_name": "nvme1n1" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd10", 00:22:27.994 "bdev_name": "nvme1n2" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd11", 00:22:27.994 "bdev_name": "nvme1n3" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd12", 00:22:27.994 "bdev_name": "nvme2n1" 00:22:27.994 }, 00:22:27.994 { 00:22:27.994 "nbd_device": "/dev/nbd13", 00:22:27.994 "bdev_name": "nvme3n1" 00:22:27.994 } 00:22:27.994 ]' 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:27.994 /dev/nbd1 00:22:27.994 /dev/nbd10 00:22:27.994 /dev/nbd11 00:22:27.994 /dev/nbd12 00:22:27.994 /dev/nbd13' 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:27.994 /dev/nbd1 00:22:27.994 /dev/nbd10 00:22:27.994 /dev/nbd11 00:22:27.994 /dev/nbd12 00:22:27.994 /dev/nbd13' 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:27.994 256+0 records in 00:22:27.994 256+0 records out 00:22:27.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073722 s, 142 MB/s 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:27.994 13:17:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:28.268 256+0 records in 00:22:28.268 256+0 records out 00:22:28.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127275 s, 8.2 MB/s 00:22:28.268 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:28.268 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:28.268 256+0 records in 00:22:28.268 256+0 records out 00:22:28.268 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.128303 s, 8.2 MB/s 00:22:28.268 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:28.268 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:22:28.533 256+0 records in 00:22:28.533 256+0 records out 00:22:28.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123899 s, 8.5 MB/s 00:22:28.533 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:28.533 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:22:28.533 256+0 records in 00:22:28.533 256+0 records out 00:22:28.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132955 s, 7.9 MB/s 00:22:28.533 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:28.533 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:22:28.790 256+0 records in 00:22:28.790 256+0 records out 00:22:28.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127367 s, 8.2 MB/s 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:22:28.790 256+0 records in 00:22:28.790 256+0 records out 00:22:28.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142112 s, 7.4 MB/s 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:28.790 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:28.791 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:28.791 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:28.791 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:28.791 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:28.791 13:17:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.354 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.610 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.868 13:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.125 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:30.383 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.640 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.641 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.901 13:17:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:30.901 13:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:31.159 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:31.417 malloc_lvol_verify 00:22:31.417 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:31.982 4b2b3410-5d69-4ad1-84e6-644fae94f38a 00:22:31.982 13:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:32.240 9960b4ff-b806-48a9-9831-05fbc075aa8b 00:22:32.240 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:32.498 /dev/nbd0 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
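The closing nbd_with_lvol_verify check layers a logical volume on a malloc bdev, exports it as /dev/nbd0, and formats it; the mke2fs output that follows resumes the trace. A condensed sketch of the same round trip, assuming a running SPDK target listening on /var/tmp/spdk-nbd.sock and rpc.py taken from the SPDK repo's scripts/ directory:

```bash
#!/usr/bin/env bash
# Sketch of the lvol-over-nbd round trip traced above (assumptions: a
# running spdk_tgt on /var/tmp/spdk-nbd.sock; rpc.py is SPDK's scripts/rpc.py).
set -euo pipefail
sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 16 MiB malloc bdev with 512-byte blocks, carrying a lvolstore and a 4 MiB lvol
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs

# Export the lvol to the host, then poll for the device the way waitfornbd does
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
done

mkfs.ext4 /dev/nbd0                        # format the exported volume
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0  # tear the export down again
```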
00:22:32.498 mke2fs 1.47.0 (5-Feb-2023) 00:22:32.498 Discarding device blocks: 0/4096 done 00:22:32.498 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:32.498 00:22:32.498 Allocating group tables: 0/1 done 00:22:32.498 Writing inode tables: 0/1 done 00:22:32.498 Creating journal (1024 blocks): done 00:22:32.498 Writing superblocks and filesystem accounting information: 0/1 done 00:22:32.498 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:32.498 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74111 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74111 ']' 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74111 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74111 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.755 killing process with pid 74111 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74111' 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74111 00:22:32.755 13:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74111 00:22:34.128 13:17:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:34.128 00:22:34.128 real 0m14.401s 00:22:34.128 user 0m20.801s 00:22:34.128 sys 0m4.722s 00:22:34.128 13:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.128 13:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:34.128 ************************************ 
00:22:34.128 END TEST bdev_nbd 00:22:34.128 ************************************ 00:22:34.128 13:17:20 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:22:34.128 13:17:20 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:22:34.128 13:17:20 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:22:34.128 13:17:20 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:22:34.128 13:17:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.128 13:17:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.128 13:17:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:34.128 ************************************ 00:22:34.128 START TEST bdev_fio 00:22:34.128 ************************************ 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:34.128 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:22:34.128 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n2]' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n2 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n3]' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n3 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.129 13:17:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:34.129 ************************************ 00:22:34.129 START TEST bdev_fio_rw_verify 00:22:34.129 ************************************ 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:34.129 13:17:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:34.387 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:34.387 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:34.387 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:34.387 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:34.387 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:34.387 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:34.387 fio-3.35 00:22:34.387 Starting 6 threads 00:22:46.667 00:22:46.667 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74558: Fri Dec 6 13:17:32 2024 00:22:46.667 read: IOPS=26.6k, BW=104MiB/s (109MB/s)(1038MiB/10004msec) 00:22:46.667 slat (usec): min=3, max=1083, avg= 7.71, stdev= 5.47 00:22:46.667 clat (usec): min=103, max=7521, avg=693.06, 
stdev=251.47 00:22:46.667 lat (usec): min=113, max=7531, avg=700.77, stdev=252.21 00:22:46.667 clat percentiles (usec): 00:22:46.667 | 50.000th=[ 701], 99.000th=[ 1287], 99.900th=[ 1795], 99.990th=[ 3687], 00:22:46.667 | 99.999th=[ 4080] 00:22:46.667 write: IOPS=27.0k, BW=105MiB/s (111MB/s)(1055MiB/10004msec); 0 zone resets 00:22:46.667 slat (usec): min=14, max=1951, avg=29.77, stdev=29.71 00:22:46.667 clat (usec): min=106, max=10434, avg=784.48, stdev=285.66 00:22:46.667 lat (usec): min=133, max=10451, avg=814.25, stdev=288.35 00:22:46.667 clat percentiles (usec): 00:22:46.667 | 50.000th=[ 783], 99.000th=[ 1532], 99.900th=[ 2737], 99.990th=[ 4752], 00:22:46.667 | 99.999th=[10421] 00:22:46.667 bw ( KiB/s): min=92264, max=134080, per=100.00%, avg=108193.47, stdev=1889.60, samples=114 00:22:46.667 iops : min=23066, max=33519, avg=27048.05, stdev=472.37, samples=114 00:22:46.667 lat (usec) : 250=1.97%, 500=17.04%, 750=32.28%, 1000=34.81% 00:22:46.667 lat (msec) : 2=13.74%, 4=0.15%, 10=0.01%, 20=0.01% 00:22:46.667 cpu : usr=59.98%, sys=26.28%, ctx=6836, majf=0, minf=23100 00:22:46.667 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:46.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.667 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.667 issued rwts: total=265635,270102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.667 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:46.667 00:22:46.667 Run status group 0 (all jobs): 00:22:46.667 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=1038MiB (1088MB), run=10004-10004msec 00:22:46.667 WRITE: bw=105MiB/s (111MB/s), 105MiB/s-105MiB/s (111MB/s-111MB/s), io=1055MiB (1106MB), run=10004-10004msec 00:22:46.667 ----------------------------------------------------- 00:22:46.667 Suppressions used: 00:22:46.667 count bytes template 00:22:46.667 6 48 /usr/src/fio/parse.c 00:22:46.667 4289 411744 /usr/src/fio/iolog.c 00:22:46.667 1 8 libtcmalloc_minimal.so 00:22:46.667 1 904 libcrypto.so 00:22:46.667 ----------------------------------------------------- 00:22:46.667 00:22:46.667 00:22:46.667 real 0m12.655s 00:22:46.667 user 0m38.077s 00:22:46.667 sys 0m16.178s 00:22:46.667 13:17:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.667 13:17:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:46.667 ************************************ 00:22:46.667 END TEST bdev_fio_rw_verify 00:22:46.667 ************************************ 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:22:46.925 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "73f85ef4-59da-4a2f-ba68-79fbe8423e81"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "73f85ef4-59da-4a2f-ba68-79fbe8423e81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1db03f47-d645-4837-8216-f0c63849989e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1db03f47-d645-4837-8216-f0c63849989e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "a4fa627d-ff18-46b3-9b0b-c02d6a696952"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4fa627d-ff18-46b3-9b0b-c02d6a696952",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "ce5fc5be-1b9b-4624-a57e-0d9ea9cf22bf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce5fc5be-1b9b-4624-a57e-0d9ea9cf22bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "304912cf-abc3-43ff-a220-e64b716415df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "304912cf-abc3-43ff-a220-e64b716415df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "95909caa-b42e-4dd1-a8dd-c3c37dedb3aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "95909caa-b42e-4dd1-a8dd-c3c37dedb3aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.926 /home/vagrant/spdk_repo/spdk 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:22:46.926 00:22:46.926 real 0m12.843s 00:22:46.926 user 0m38.184s 00:22:46.926 sys 0m16.259s 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.926 ************************************ 00:22:46.926 END TEST bdev_fio 00:22:46.926 ************************************ 00:22:46.926 13:17:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:46.926 13:17:33 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:46.926 13:17:33 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:46.926 13:17:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:46.926 13:17:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.926 13:17:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:46.926 ************************************ 00:22:46.926 START TEST bdev_verify 00:22:46.926 ************************************ 00:22:46.926 13:17:33 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:47.183 [2024-12-06 13:17:33.941315] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:22:47.183 [2024-12-06 13:17:33.941486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74731 ] 00:22:47.183 [2024-12-06 13:17:34.118786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:47.442 [2024-12-06 13:17:34.251765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.442 [2024-12-06 13:17:34.251765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.007 Running I/O for 5 seconds... 
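bdev_verify drives all six bdevs through bdevperf's built-in verify workload; the per-second IOPS samples and the latency table follow after this note. Reformatted for readability, the invocation from the trace above reads:

```bash
# bdev_verify's bdevperf invocation, reflowed. Flag glosses are my reading
# of bdevperf's options, not output from this run:
#   -q 128     queue depth per job
#   -o 4096    I/O size in bytes
#   -w verify  write a pattern, read it back, and check the payload
#   -t 5       run time in seconds
#   -C         every core submits to every bdev (hence the paired
#              "Core Mask 0x1"/"Core Mask 0x2" rows per device below)
#   -m 0x3     reactor core mask: cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```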
00:22:50.310 22592.00 IOPS, 88.25 MiB/s [2024-12-06T13:17:38.258Z] 22068.50 IOPS, 86.21 MiB/s [2024-12-06T13:17:39.190Z] 21632.67 IOPS, 84.50 MiB/s [2024-12-06T13:17:40.125Z] 21312.50 IOPS, 83.25 MiB/s
00:22:53.109 Latency(us)
00:22:53.109 [2024-12-06T13:17:40.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:53.109 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x0 length 0x20000
00:22:53.109 nvme0n1 : 5.07 1465.65 5.73 0.00 0.00 87173.38 9711.24 97231.59
00:22:53.109 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x20000 length 0x20000
00:22:53.109 nvme0n1 : 5.06 1643.35 6.42 0.00 0.00 77742.61 5928.03 81502.95
00:22:53.109 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x0 length 0x80000
00:22:53.109 nvme1n1 : 5.07 1464.80 5.72 0.00 0.00 87056.54 14417.92 86269.21
00:22:53.109 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x80000 length 0x80000
00:22:53.109 nvme1n1 : 5.06 1642.77 6.42 0.00 0.00 77628.08 12451.84 73876.95
00:22:53.109 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x0 length 0x80000
00:22:53.109 nvme1n2 : 5.06 1466.72 5.73 0.00 0.00 86786.21 12988.04 77213.32
00:22:53.109 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x80000 length 0x80000
00:22:53.109 nvme1n2 : 5.07 1642.13 6.41 0.00 0.00 77524.13 12690.15 71017.19
00:22:53.109 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x0 length 0x80000
00:22:53.109 nvme1n3 : 5.07 1463.90 5.72 0.00 0.00 86797.43 13822.14 88652.33
00:22:53.109 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x80000 length 0x80000
00:22:53.109 nvme1n3 : 5.07 1641.38 6.41 0.00 0.00 77419.62 9294.20 83409.45
00:22:53.109 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x0 length 0xa0000
00:22:53.109 nvme2n1 : 5.05 1443.38 5.64 0.00 0.00 87871.20 10783.65 95801.72
00:22:53.109 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0xa0000 length 0xa0000
00:22:53.109 nvme2n1 : 5.05 1544.81 6.03 0.00 0.00 82109.29 10187.87 120109.61
00:22:53.109 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0x0 length 0xbd0bd
00:22:53.109 nvme3n1 : 5.08 2553.34 9.97 0.00 0.00 49514.46 3321.48 99614.72
00:22:53.109 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:53.109 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:22:53.109 nvme3n1 : 5.08 2655.12 10.37 0.00 0.00 47583.97 3768.32 75306.82
00:22:53.109 [2024-12-06T13:17:40.125Z] ===================================================================================================================
00:22:53.109 [2024-12-06T13:17:40.125Z] Total : 20627.35 80.58 0.00 0.00 73951.84 3321.48 120109.61
00:22:54.044
00:22:54.044 real 0m7.213s
00:22:54.044 user 0m11.560s
00:22:54.044 sys 0m1.640s
00:22:54.044 13:17:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:54.044
13:17:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:54.044 ************************************ 00:22:54.044 END TEST bdev_verify 00:22:54.044 ************************************ 00:22:54.303 13:17:41 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:54.303 13:17:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:54.303 13:17:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.303 13:17:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.303 ************************************ 00:22:54.303 START TEST bdev_verify_big_io 00:22:54.303 ************************************ 00:22:54.303 13:17:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:54.303 [2024-12-06 13:17:41.174885] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:22:54.303 [2024-12-06 13:17:41.175056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74831 ] 00:22:54.561 [2024-12-06 13:17:41.359392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:54.561 [2024-12-06 13:17:41.525156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.561 [2024-12-06 13:17:41.525174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.494 Running I/O for 5 seconds... 
00:23:00.678 368.00 IOPS, 23.00 MiB/s [2024-12-06T13:17:48.258Z] 1716.50 IOPS, 107.28 MiB/s [2024-12-06T13:17:48.258Z] 3035.67 IOPS, 189.73 MiB/s
00:23:01.242 Latency(us)
00:23:01.242 [2024-12-06T13:17:48.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.242 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x0 length 0x2000
00:23:01.242 nvme0n1 : 5.61 155.40 9.71 0.00 0.00 802908.72 15013.70 1136275.08
00:23:01.242 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x2000 length 0x2000
00:23:01.242 nvme0n1 : 5.97 101.80 6.36 0.00 0.00 1164697.01 35508.60 1609087.53
00:23:01.242 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x0 length 0x8000
00:23:01.242 nvme1n1 : 5.95 143.94 9.00 0.00 0.00 810973.20 138221.38 1273543.21
00:23:01.242 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x8000 length 0x8000
00:23:01.242 nvme1n1 : 5.98 133.87 8.37 0.00 0.00 863967.34 55765.18 957063.91
00:23:01.242 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x0 length 0x8000
00:23:01.242 nvme1n2 : 5.82 120.02 7.50 0.00 0.00 964607.98 173491.67 2486078.37
00:23:01.242 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x8000 length 0x8000
00:23:01.242 nvme1n2 : 6.00 117.27 7.33 0.00 0.00 996770.23 20494.89 1342177.28
00:23:01.242 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x0 length 0x8000
00:23:01.242 nvme1n3 : 6.00 117.29 7.33 0.00 0.00 955564.30 61484.68 3065654.92
00:23:01.242 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x8000 length 0x8000
00:23:01.242 nvme1n3 : 5.99 116.35 7.27 0.00 0.00 978288.90 4408.79 1837867.75
00:23:01.242 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x0 length 0xa000
00:23:01.242 nvme2n1 : 6.02 156.93 9.81 0.00 0.00 701671.50 10664.49 934185.89
00:23:01.242 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0xa000 length 0xa000
00:23:01.242 nvme2n1 : 6.01 135.68 8.48 0.00 0.00 810849.20 18588.39 1098145.05
00:23:01.242 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0x0 length 0xbd0b
00:23:01.242 nvme3n1 : 6.03 191.14 11.95 0.00 0.00 561786.71 6076.97 1250665.19
00:23:01.242 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:01.242 Verification LBA range: start 0xbd0b length 0xbd0b
00:23:01.242 nvme3n1 : 6.00 146.69 9.17 0.00 0.00 725452.95 8519.68 1067641.02
00:23:01.242 [2024-12-06T13:17:48.258Z] ===================================================================================================================
00:23:01.242 [2024-12-06T13:17:48.258Z] Total : 1636.38 102.27 0.00 0.00 835767.56 4408.79 3065654.92
00:23:03.139
00:23:03.140 real 0m8.663s
00:23:03.140 user 0m15.694s
00:23:03.140 sys 0m0.607s
00:23:03.140 13:17:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:03.140 13:17:49
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:03.140 ************************************ 00:23:03.140 END TEST bdev_verify_big_io 00:23:03.140 ************************************ 00:23:03.140 13:17:49 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:03.140 13:17:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:03.140 13:17:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.140 13:17:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:03.140 ************************************ 00:23:03.140 START TEST bdev_write_zeroes 00:23:03.140 ************************************ 00:23:03.140 13:17:49 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:03.140 [2024-12-06 13:17:49.904419] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:23:03.140 [2024-12-06 13:17:49.904612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74947 ] 00:23:03.140 [2024-12-06 13:17:50.088013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.400 [2024-12-06 13:17:50.219359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.974 Running I/O for 1 seconds... 
00:23:04.907 66240.00 IOPS, 258.75 MiB/s
00:23:04.907 Latency(us)
00:23:04.907 [2024-12-06T13:17:51.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.907 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:04.907 nvme0n1 : 1.02 10004.58 39.08 0.00 0.00 12780.34 6791.91 28716.68
00:23:04.907 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:04.907 nvme1n1 : 1.03 9990.12 39.02 0.00 0.00 12787.14 6732.33 29074.15
00:23:04.907 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:04.907 nvme1n2 : 1.03 9974.97 38.96 0.00 0.00 12794.80 6732.33 29550.78
00:23:04.907 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:04.907 nvme1n3 : 1.03 9960.60 38.91 0.00 0.00 12802.22 6642.97 30027.40
00:23:04.907 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:04.907 nvme2n1 : 1.04 10015.58 39.12 0.00 0.00 12720.47 6315.29 25856.93
00:23:04.907 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:04.907 nvme3n1 : 1.03 15418.36 60.23 0.00 0.00 8253.91 2993.80 21090.68
00:23:04.907 [2024-12-06T13:17:51.923Z] ===================================================================================================================
00:23:04.907 [2024-12-06T13:17:51.923Z] Total : 65364.21 255.33 0.00 0.00 11705.29 2993.80 30027.40
00:23:05.842
00:23:05.842 real 0m3.036s
00:23:05.842 user 0m2.195s
00:23:05.842 sys 0m0.650s
00:23:05.842 13:17:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:05.842 13:17:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:23:05.842 ************************************ END TEST bdev_write_zeroes ************************************
00:23:06.100 13:17:52 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:06.100 13:17:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:23:06.100 13:17:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:06.100 13:17:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:23:06.100 ************************************ START TEST bdev_json_nonenclosed ************************************
00:23:06.100 13:17:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:06.100 [2024-12-06 13:17:52.976112] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
00:23:06.100 [2024-12-06 13:17:52.976287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75008 ] 00:23:06.358 [2024-12-06 13:17:53.152997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.359 [2024-12-06 13:17:53.281931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.359 [2024-12-06 13:17:53.282054] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:06.359 [2024-12-06 13:17:53.282085] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:06.359 [2024-12-06 13:17:53.282099] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:06.617 00:23:06.617 real 0m0.662s 00:23:06.617 user 0m0.422s 00:23:06.617 sys 0m0.135s 00:23:06.617 13:17:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.617 13:17:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:06.617 ************************************ 00:23:06.617 END TEST bdev_json_nonenclosed 00:23:06.617 ************************************ 00:23:06.617 13:17:53 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:06.617 13:17:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:06.617 13:17:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.617 13:17:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:06.617 ************************************ 00:23:06.617 START TEST bdev_json_nonarray 00:23:06.617 ************************************ 00:23:06.617 13:17:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:06.875 [2024-12-06 13:17:53.697729] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:23:06.875 [2024-12-06 13:17:53.697950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75034 ] 00:23:06.875 [2024-12-06 13:17:53.887810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.134 [2024-12-06 13:17:54.046118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.134 [2024-12-06 13:17:54.046292] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:23:07.134 [2024-12-06 13:17:54.046329] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:07.134 [2024-12-06 13:17:54.046346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:07.391 00:23:07.391 real 0m0.722s 00:23:07.391 user 0m0.468s 00:23:07.391 sys 0m0.149s 00:23:07.391 13:17:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.391 13:17:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:07.391 ************************************ 00:23:07.391 END TEST bdev_json_nonarray 00:23:07.391 ************************************ 00:23:07.391 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:23:07.391 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:23:07.391 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:23:07.391 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:23:07.391 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:23:07.392 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:07.392 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:07.392 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:23:07.392 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:23:07.392 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:23:07.392 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:23:07.392 13:17:54 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:07.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:08.897 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:08.897 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:08.897 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:08.897 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:08.897 00:23:08.897 real 0m59.628s 00:23:08.897 user 1m44.742s 00:23:08.897 sys 0m27.870s 00:23:08.897 13:17:55 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.897 13:17:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:08.897 ************************************ 00:23:08.897 END TEST blockdev_xnvme 00:23:08.897 ************************************ 00:23:09.169 13:17:55 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:09.169 13:17:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:09.169 13:17:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.169 13:17:55 -- common/autotest_common.sh@10 -- # set +x 00:23:09.169 ************************************ 00:23:09.169 START TEST ublk 00:23:09.169 ************************************ 00:23:09.169 13:17:55 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:09.169 * Looking for test storage... 
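The bdev_json_nonenclosed and bdev_json_nonarray cases that closed out blockdev_xnvme above are negative tests: bdevperf is handed a --json config that is either not enclosed in {} or whose "subsystems" key is not an array, and the expected outcome is exactly the error-and-shutdown sequence logged, not I/O. For reference, the smallest well-formed shape those checks gate on can be produced like this (a sketch; the file name is arbitrary, and the layout is the same one save_config echoes later in this log):

  # minimal valid SPDK JSON config: a top-level object with a "subsystems" array
  echo '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/valid.json

With the blockdev suite done, the node resets its PCI bindings and moves on to the ublk suite.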
00:23:09.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.169 13:17:56 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.169 13:17:56 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.169 13:17:56 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.169 13:17:56 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.169 13:17:56 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.169 13:17:56 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.169 13:17:56 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.169 13:17:56 ublk -- scripts/common.sh@344 -- # case "$op" in 00:23:09.169 13:17:56 ublk -- scripts/common.sh@345 -- # : 1 00:23:09.169 13:17:56 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.169 13:17:56 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:09.169 13:17:56 ublk -- scripts/common.sh@365 -- # decimal 1 00:23:09.169 13:17:56 ublk -- scripts/common.sh@353 -- # local d=1 00:23:09.169 13:17:56 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.169 13:17:56 ublk -- scripts/common.sh@355 -- # echo 1 00:23:09.169 13:17:56 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.169 13:17:56 ublk -- scripts/common.sh@366 -- # decimal 2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@353 -- # local d=2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.169 13:17:56 ublk -- scripts/common.sh@355 -- # echo 2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.169 13:17:56 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.169 13:17:56 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.169 13:17:56 ublk -- scripts/common.sh@368 -- # return 0 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:09.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.169 --rc genhtml_branch_coverage=1 00:23:09.169 --rc genhtml_function_coverage=1 00:23:09.169 --rc genhtml_legend=1 00:23:09.169 --rc geninfo_all_blocks=1 00:23:09.169 --rc geninfo_unexecuted_blocks=1 00:23:09.169 00:23:09.169 ' 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:09.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.169 --rc genhtml_branch_coverage=1 00:23:09.169 --rc genhtml_function_coverage=1 00:23:09.169 --rc genhtml_legend=1 00:23:09.169 --rc geninfo_all_blocks=1 00:23:09.169 --rc geninfo_unexecuted_blocks=1 00:23:09.169 00:23:09.169 ' 00:23:09.169 13:17:56 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:09.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.169 --rc genhtml_branch_coverage=1 00:23:09.169 --rc 
genhtml_function_coverage=1 00:23:09.169 --rc genhtml_legend=1 00:23:09.170 --rc geninfo_all_blocks=1 00:23:09.170 --rc geninfo_unexecuted_blocks=1 00:23:09.170 00:23:09.170 ' 00:23:09.170 13:17:56 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:09.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.170 --rc genhtml_branch_coverage=1 00:23:09.170 --rc genhtml_function_coverage=1 00:23:09.170 --rc genhtml_legend=1 00:23:09.170 --rc geninfo_all_blocks=1 00:23:09.170 --rc geninfo_unexecuted_blocks=1 00:23:09.170 00:23:09.170 ' 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:09.170 13:17:56 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:09.170 13:17:56 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:09.170 13:17:56 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:09.170 13:17:56 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:09.170 13:17:56 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:09.170 13:17:56 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:09.170 13:17:56 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:09.170 13:17:56 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:23:09.170 13:17:56 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:23:09.170 13:17:56 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:09.170 13:17:56 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.170 13:17:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:09.170 ************************************ 00:23:09.170 START TEST test_save_ublk_config 00:23:09.170 ************************************ 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75323 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75323 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75323 ']' 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:09.170 13:17:56 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.170 13:17:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:09.428 [2024-12-06 13:17:56.262548] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:23:09.428 [2024-12-06 13:17:56.262735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75323 ] 00:23:09.687 [2024-12-06 13:17:56.447324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.687 [2024-12-06 13:17:56.575950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:10.622 [2024-12-06 13:17:57.467162] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:10.622 [2024-12-06 13:17:57.468338] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:10.622 malloc0 00:23:10.622 [2024-12-06 13:17:57.555331] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:10.622 [2024-12-06 13:17:57.555456] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:10.622 [2024-12-06 13:17:57.555476] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:10.622 [2024-12-06 13:17:57.555486] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:10.622 [2024-12-06 13:17:57.563345] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:10.622 [2024-12-06 13:17:57.563378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:10.622 [2024-12-06 13:17:57.571163] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:10.622 [2024-12-06 13:17:57.571305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:10.622 [2024-12-06 13:17:57.588155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:10.622 0 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.622 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:10.881 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.881 13:17:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:23:10.881 
"subsystems": [ 00:23:10.881 { 00:23:10.881 "subsystem": "fsdev", 00:23:10.881 "config": [ 00:23:10.881 { 00:23:10.881 "method": "fsdev_set_opts", 00:23:10.881 "params": { 00:23:10.881 "fsdev_io_pool_size": 65535, 00:23:10.881 "fsdev_io_cache_size": 256 00:23:10.881 } 00:23:10.881 } 00:23:10.881 ] 00:23:10.881 }, 00:23:10.881 { 00:23:10.881 "subsystem": "keyring", 00:23:10.881 "config": [] 00:23:10.881 }, 00:23:10.881 { 00:23:10.881 "subsystem": "iobuf", 00:23:10.881 "config": [ 00:23:10.881 { 00:23:10.881 "method": "iobuf_set_options", 00:23:10.881 "params": { 00:23:10.881 "small_pool_count": 8192, 00:23:10.881 "large_pool_count": 1024, 00:23:10.881 "small_bufsize": 8192, 00:23:10.881 "large_bufsize": 135168, 00:23:10.881 "enable_numa": false 00:23:10.881 } 00:23:10.881 } 00:23:10.881 ] 00:23:10.881 }, 00:23:10.881 { 00:23:10.881 "subsystem": "sock", 00:23:10.881 "config": [ 00:23:10.881 { 00:23:10.881 "method": "sock_set_default_impl", 00:23:10.881 "params": { 00:23:10.881 "impl_name": "posix" 00:23:10.881 } 00:23:10.881 }, 00:23:10.881 { 00:23:10.881 "method": "sock_impl_set_options", 00:23:10.881 "params": { 00:23:10.881 "impl_name": "ssl", 00:23:10.881 "recv_buf_size": 4096, 00:23:10.881 "send_buf_size": 4096, 00:23:10.881 "enable_recv_pipe": true, 00:23:10.881 "enable_quickack": false, 00:23:10.881 "enable_placement_id": 0, 00:23:10.881 "enable_zerocopy_send_server": true, 00:23:10.881 "enable_zerocopy_send_client": false, 00:23:10.881 "zerocopy_threshold": 0, 00:23:10.881 "tls_version": 0, 00:23:10.881 "enable_ktls": false 00:23:10.881 } 00:23:10.881 }, 00:23:10.881 { 00:23:10.881 "method": "sock_impl_set_options", 00:23:10.881 "params": { 00:23:10.881 "impl_name": "posix", 00:23:10.881 "recv_buf_size": 2097152, 00:23:10.881 "send_buf_size": 2097152, 00:23:10.881 "enable_recv_pipe": true, 00:23:10.882 "enable_quickack": false, 00:23:10.882 "enable_placement_id": 0, 00:23:10.882 "enable_zerocopy_send_server": true, 00:23:10.882 "enable_zerocopy_send_client": false, 00:23:10.882 "zerocopy_threshold": 0, 00:23:10.882 "tls_version": 0, 00:23:10.882 "enable_ktls": false 00:23:10.882 } 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "vmd", 00:23:10.882 "config": [] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "accel", 00:23:10.882 "config": [ 00:23:10.882 { 00:23:10.882 "method": "accel_set_options", 00:23:10.882 "params": { 00:23:10.882 "small_cache_size": 128, 00:23:10.882 "large_cache_size": 16, 00:23:10.882 "task_count": 2048, 00:23:10.882 "sequence_count": 2048, 00:23:10.882 "buf_count": 2048 00:23:10.882 } 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "bdev", 00:23:10.882 "config": [ 00:23:10.882 { 00:23:10.882 "method": "bdev_set_options", 00:23:10.882 "params": { 00:23:10.882 "bdev_io_pool_size": 65535, 00:23:10.882 "bdev_io_cache_size": 256, 00:23:10.882 "bdev_auto_examine": true, 00:23:10.882 "iobuf_small_cache_size": 128, 00:23:10.882 "iobuf_large_cache_size": 16 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "bdev_raid_set_options", 00:23:10.882 "params": { 00:23:10.882 "process_window_size_kb": 1024, 00:23:10.882 "process_max_bandwidth_mb_sec": 0 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "bdev_iscsi_set_options", 00:23:10.882 "params": { 00:23:10.882 "timeout_sec": 30 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "bdev_nvme_set_options", 00:23:10.882 "params": { 00:23:10.882 "action_on_timeout": "none", 
00:23:10.882 "timeout_us": 0, 00:23:10.882 "timeout_admin_us": 0, 00:23:10.882 "keep_alive_timeout_ms": 10000, 00:23:10.882 "arbitration_burst": 0, 00:23:10.882 "low_priority_weight": 0, 00:23:10.882 "medium_priority_weight": 0, 00:23:10.882 "high_priority_weight": 0, 00:23:10.882 "nvme_adminq_poll_period_us": 10000, 00:23:10.882 "nvme_ioq_poll_period_us": 0, 00:23:10.882 "io_queue_requests": 0, 00:23:10.882 "delay_cmd_submit": true, 00:23:10.882 "transport_retry_count": 4, 00:23:10.882 "bdev_retry_count": 3, 00:23:10.882 "transport_ack_timeout": 0, 00:23:10.882 "ctrlr_loss_timeout_sec": 0, 00:23:10.882 "reconnect_delay_sec": 0, 00:23:10.882 "fast_io_fail_timeout_sec": 0, 00:23:10.882 "disable_auto_failback": false, 00:23:10.882 "generate_uuids": false, 00:23:10.882 "transport_tos": 0, 00:23:10.882 "nvme_error_stat": false, 00:23:10.882 "rdma_srq_size": 0, 00:23:10.882 "io_path_stat": false, 00:23:10.882 "allow_accel_sequence": false, 00:23:10.882 "rdma_max_cq_size": 0, 00:23:10.882 "rdma_cm_event_timeout_ms": 0, 00:23:10.882 "dhchap_digests": [ 00:23:10.882 "sha256", 00:23:10.882 "sha384", 00:23:10.882 "sha512" 00:23:10.882 ], 00:23:10.882 "dhchap_dhgroups": [ 00:23:10.882 "null", 00:23:10.882 "ffdhe2048", 00:23:10.882 "ffdhe3072", 00:23:10.882 "ffdhe4096", 00:23:10.882 "ffdhe6144", 00:23:10.882 "ffdhe8192" 00:23:10.882 ] 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "bdev_nvme_set_hotplug", 00:23:10.882 "params": { 00:23:10.882 "period_us": 100000, 00:23:10.882 "enable": false 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "bdev_malloc_create", 00:23:10.882 "params": { 00:23:10.882 "name": "malloc0", 00:23:10.882 "num_blocks": 8192, 00:23:10.882 "block_size": 4096, 00:23:10.882 "physical_block_size": 4096, 00:23:10.882 "uuid": "79814336-a057-4ae9-9fc3-cc9d93dc240f", 00:23:10.882 "optimal_io_boundary": 0, 00:23:10.882 "md_size": 0, 00:23:10.882 "dif_type": 0, 00:23:10.882 "dif_is_head_of_md": false, 00:23:10.882 "dif_pi_format": 0 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "bdev_wait_for_examine" 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "scsi", 00:23:10.882 "config": null 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "scheduler", 00:23:10.882 "config": [ 00:23:10.882 { 00:23:10.882 "method": "framework_set_scheduler", 00:23:10.882 "params": { 00:23:10.882 "name": "static" 00:23:10.882 } 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "vhost_scsi", 00:23:10.882 "config": [] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "vhost_blk", 00:23:10.882 "config": [] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "ublk", 00:23:10.882 "config": [ 00:23:10.882 { 00:23:10.882 "method": "ublk_create_target", 00:23:10.882 "params": { 00:23:10.882 "cpumask": "1" 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "ublk_start_disk", 00:23:10.882 "params": { 00:23:10.882 "bdev_name": "malloc0", 00:23:10.882 "ublk_id": 0, 00:23:10.882 "num_queues": 1, 00:23:10.882 "queue_depth": 128 00:23:10.882 } 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "nbd", 00:23:10.882 "config": [] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "nvmf", 00:23:10.882 "config": [ 00:23:10.882 { 00:23:10.882 "method": "nvmf_set_config", 00:23:10.882 "params": { 00:23:10.882 "discovery_filter": "match_any", 00:23:10.882 "admin_cmd_passthru": { 00:23:10.882 "identify_ctrlr": false 
00:23:10.882 }, 00:23:10.882 "dhchap_digests": [ 00:23:10.882 "sha256", 00:23:10.882 "sha384", 00:23:10.882 "sha512" 00:23:10.882 ], 00:23:10.882 "dhchap_dhgroups": [ 00:23:10.882 "null", 00:23:10.882 "ffdhe2048", 00:23:10.882 "ffdhe3072", 00:23:10.882 "ffdhe4096", 00:23:10.882 "ffdhe6144", 00:23:10.882 "ffdhe8192" 00:23:10.882 ] 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "nvmf_set_max_subsystems", 00:23:10.882 "params": { 00:23:10.882 "max_subsystems": 1024 00:23:10.882 } 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "method": "nvmf_set_crdt", 00:23:10.882 "params": { 00:23:10.882 "crdt1": 0, 00:23:10.882 "crdt2": 0, 00:23:10.882 "crdt3": 0 00:23:10.882 } 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 }, 00:23:10.882 { 00:23:10.882 "subsystem": "iscsi", 00:23:10.882 "config": [ 00:23:10.882 { 00:23:10.882 "method": "iscsi_set_options", 00:23:10.882 "params": { 00:23:10.882 "node_base": "iqn.2016-06.io.spdk", 00:23:10.882 "max_sessions": 128, 00:23:10.882 "max_connections_per_session": 2, 00:23:10.882 "max_queue_depth": 64, 00:23:10.882 "default_time2wait": 2, 00:23:10.882 "default_time2retain": 20, 00:23:10.882 "first_burst_length": 8192, 00:23:10.882 "immediate_data": true, 00:23:10.882 "allow_duplicated_isid": false, 00:23:10.882 "error_recovery_level": 0, 00:23:10.882 "nop_timeout": 60, 00:23:10.882 "nop_in_interval": 30, 00:23:10.882 "disable_chap": false, 00:23:10.882 "require_chap": false, 00:23:10.882 "mutual_chap": false, 00:23:10.882 "chap_group": 0, 00:23:10.882 "max_large_datain_per_connection": 64, 00:23:10.882 "max_r2t_per_connection": 4, 00:23:10.882 "pdu_pool_size": 36864, 00:23:10.882 "immediate_data_pool_size": 16384, 00:23:10.882 "data_out_pool_size": 2048 00:23:10.882 } 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 } 00:23:10.882 ] 00:23:10.882 }' 00:23:10.882 13:17:57 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75323 00:23:10.882 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75323 ']' 00:23:10.882 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75323 00:23:10.882 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:10.882 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.882 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75323 00:23:11.141 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.141 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.141 killing process with pid 75323 00:23:11.141 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75323' 00:23:11.141 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75323 00:23:11.141 13:17:57 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75323 00:23:12.518 [2024-12-06 13:17:59.266468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:12.518 [2024-12-06 13:17:59.310192] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:12.518 [2024-12-06 13:17:59.310358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:12.518 [2024-12-06 13:17:59.319240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:12.518 [2024-12-06 
13:17:59.319319] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:12.518 [2024-12-06 13:17:59.319347] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:12.518 [2024-12-06 13:17:59.319374] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:12.518 [2024-12-06 13:17:59.319553] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:14.419 13:18:01 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75389 00:23:14.419 13:18:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:23:14.419 13:18:01 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75389 00:23:14.419 13:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75389 ']' 00:23:14.419 13:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.419 13:18:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:23:14.419 "subsystems": [ 00:23:14.419 { 00:23:14.419 "subsystem": "fsdev", 00:23:14.419 "config": [ 00:23:14.419 { 00:23:14.419 "method": "fsdev_set_opts", 00:23:14.419 "params": { 00:23:14.419 "fsdev_io_pool_size": 65535, 00:23:14.419 "fsdev_io_cache_size": 256 00:23:14.419 } 00:23:14.419 } 00:23:14.419 ] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "keyring", 00:23:14.419 "config": [] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "iobuf", 00:23:14.419 "config": [ 00:23:14.419 { 00:23:14.419 "method": "iobuf_set_options", 00:23:14.419 "params": { 00:23:14.419 "small_pool_count": 8192, 00:23:14.419 "large_pool_count": 1024, 00:23:14.419 "small_bufsize": 8192, 00:23:14.419 "large_bufsize": 135168, 00:23:14.419 "enable_numa": false 00:23:14.419 } 00:23:14.419 } 00:23:14.419 ] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "sock", 00:23:14.419 "config": [ 00:23:14.419 { 00:23:14.419 "method": "sock_set_default_impl", 00:23:14.419 "params": { 00:23:14.419 "impl_name": "posix" 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "sock_impl_set_options", 00:23:14.419 "params": { 00:23:14.419 "impl_name": "ssl", 00:23:14.419 "recv_buf_size": 4096, 00:23:14.419 "send_buf_size": 4096, 00:23:14.419 "enable_recv_pipe": true, 00:23:14.419 "enable_quickack": false, 00:23:14.419 "enable_placement_id": 0, 00:23:14.419 "enable_zerocopy_send_server": true, 00:23:14.419 "enable_zerocopy_send_client": false, 00:23:14.419 "zerocopy_threshold": 0, 00:23:14.419 "tls_version": 0, 00:23:14.419 "enable_ktls": false 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "sock_impl_set_options", 00:23:14.419 "params": { 00:23:14.419 "impl_name": "posix", 00:23:14.419 "recv_buf_size": 2097152, 00:23:14.419 "send_buf_size": 2097152, 00:23:14.419 "enable_recv_pipe": true, 00:23:14.419 "enable_quickack": false, 00:23:14.419 "enable_placement_id": 0, 00:23:14.419 "enable_zerocopy_send_server": true, 00:23:14.419 "enable_zerocopy_send_client": false, 00:23:14.419 "zerocopy_threshold": 0, 00:23:14.419 "tls_version": 0, 00:23:14.419 "enable_ktls": false 00:23:14.419 } 00:23:14.419 } 00:23:14.419 ] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "vmd", 00:23:14.419 "config": [] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "accel", 00:23:14.419 "config": [ 00:23:14.419 { 00:23:14.419 "method": "accel_set_options", 00:23:14.419 "params": { 00:23:14.419 "small_cache_size": 128, 00:23:14.419 "large_cache_size": 16, 00:23:14.419 "task_count": 2048, 00:23:14.419 
"sequence_count": 2048, 00:23:14.419 "buf_count": 2048 00:23:14.419 } 00:23:14.419 } 00:23:14.419 ] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "bdev", 00:23:14.419 "config": [ 00:23:14.419 { 00:23:14.419 "method": "bdev_set_options", 00:23:14.419 "params": { 00:23:14.419 "bdev_io_pool_size": 65535, 00:23:14.419 "bdev_io_cache_size": 256, 00:23:14.419 "bdev_auto_examine": true, 00:23:14.419 "iobuf_small_cache_size": 128, 00:23:14.419 "iobuf_large_cache_size": 16 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "bdev_raid_set_options", 00:23:14.419 "params": { 00:23:14.419 "process_window_size_kb": 1024, 00:23:14.419 "process_max_bandwidth_mb_sec": 0 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "bdev_iscsi_set_options", 00:23:14.419 "params": { 00:23:14.419 "timeout_sec": 30 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "bdev_nvme_set_options", 00:23:14.419 "params": { 00:23:14.419 "action_on_timeout": "none", 00:23:14.419 "timeout_us": 0, 00:23:14.419 "timeout_admin_us": 0, 00:23:14.419 "keep_alive_timeout_ms": 10000, 00:23:14.419 "arbitration_burst": 0, 00:23:14.419 "low_priority_weight": 0, 00:23:14.419 "medium_priority_weight": 0, 00:23:14.419 "high_priority_weight": 0, 00:23:14.419 "nvme_adminq_poll_period_us": 10000, 00:23:14.419 "nvme_ioq_poll_period_us": 0, 00:23:14.419 "io_queue_requests": 0, 00:23:14.419 "delay_cmd_submit": true, 00:23:14.419 "transport_retry_count": 4, 00:23:14.419 "bdev_retry_count": 3, 00:23:14.419 "transport_ack_timeout": 0, 00:23:14.419 "ctrlr_loss_timeout_sec": 0, 00:23:14.419 "reconnect_delay_sec": 0, 00:23:14.419 "fast_io_fail_timeout_sec": 0, 00:23:14.419 "disable_auto_failback": false, 00:23:14.419 "generate_uuids": false, 00:23:14.419 "transport_tos": 0, 00:23:14.419 "nvme_error_stat": false, 00:23:14.419 "rdma_srq_size": 0, 00:23:14.419 "io_path_stat": false, 00:23:14.419 "allow_accel_sequence": false, 00:23:14.419 "rdma_max_cq_size": 0, 00:23:14.419 "rdma_cm_event_timeout_ms": 0, 00:23:14.419 "dhchap_digests": [ 00:23:14.419 "sha256", 00:23:14.419 "sha384", 00:23:14.419 "sha512" 00:23:14.419 ], 00:23:14.419 "dhchap_dhgroups": [ 00:23:14.419 "null", 00:23:14.419 "ffdhe2048", 00:23:14.419 "ffdhe3072", 00:23:14.419 "ffdhe4096", 00:23:14.419 "ffdhe6144", 00:23:14.419 "ffdhe8192" 00:23:14.419 ] 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "bdev_nvme_set_hotplug", 00:23:14.419 "params": { 00:23:14.419 "period_us": 100000, 00:23:14.419 "enable": false 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "bdev_malloc_create", 00:23:14.419 "params": { 00:23:14.419 "name": "malloc0", 00:23:14.419 "num_blocks": 8192, 00:23:14.419 "block_size": 4096, 00:23:14.419 "physical_block_size": 4096, 00:23:14.419 "uuid": "79814336-a057-4ae9-9fc3-cc9d93dc240f", 00:23:14.419 "optimal_io_boundary": 0, 00:23:14.419 "md_size": 0, 00:23:14.419 "dif_type": 0, 00:23:14.419 "dif_is_head_of_md": false, 00:23:14.419 "dif_pi_format": 0 00:23:14.419 } 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "method": "bdev_wait_for_examine" 00:23:14.419 } 00:23:14.419 ] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "scsi", 00:23:14.419 "config": null 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "scheduler", 00:23:14.419 "config": [ 00:23:14.419 { 00:23:14.419 "method": "framework_set_scheduler", 00:23:14.419 "params": { 00:23:14.419 "name": "static" 00:23:14.419 } 00:23:14.419 } 00:23:14.419 ] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": 
"vhost_scsi", 00:23:14.419 "config": [] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "vhost_blk", 00:23:14.419 "config": [] 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "subsystem": "ublk", 00:23:14.419 "config": [ 00:23:14.419 { 00:23:14.419 "method": "ublk_create_target", 00:23:14.419 "params": { 00:23:14.419 "cpumask": "1" 00:23:14.419 } 00:23:14.420 }, 00:23:14.420 { 00:23:14.420 "method": "ublk_start_disk", 00:23:14.420 "params": { 00:23:14.420 "bdev_name": "malloc0", 00:23:14.420 "ublk_id": 0, 00:23:14.420 "num_queues": 1, 00:23:14.420 "queue_depth": 128 00:23:14.420 } 00:23:14.420 } 00:23:14.420 ] 00:23:14.420 }, 00:23:14.420 { 00:23:14.420 "subsystem": "nbd", 00:23:14.420 "config": [] 00:23:14.420 }, 00:23:14.420 { 00:23:14.420 "subsystem": "nvmf", 00:23:14.420 "config": [ 00:23:14.420 { 00:23:14.420 "method": "nvmf_set_config", 00:23:14.420 "params": { 00:23:14.420 "discovery_filter": "match_any", 00:23:14.420 "admin_cmd_passthru": { 00:23:14.420 "identify_ctrlr": false 00:23:14.420 }, 00:23:14.420 "dhchap_digests": [ 00:23:14.420 "sha256", 00:23:14.420 "sha384", 00:23:14.420 "sha512" 00:23:14.420 ], 00:23:14.420 "dhchap_dhgroups": [ 00:23:14.420 "null", 00:23:14.420 "ffdhe2048", 00:23:14.420 "ffdhe3072", 00:23:14.420 "ffdhe4096", 00:23:14.420 "ffdhe6144", 00:23:14.420 "ffdhe8192" 00:23:14.420 ] 00:23:14.420 } 00:23:14.420 }, 00:23:14.420 { 00:23:14.420 "method": "nvmf_set_max_subsystems", 00:23:14.420 "params": { 00:23:14.420 "max_subsystems": 1024 00:23:14.420 } 00:23:14.420 }, 00:23:14.420 { 00:23:14.420 "method": "nvmf_set_crdt", 00:23:14.420 "params": { 00:23:14.420 "crdt1": 0, 00:23:14.420 "crdt2": 0, 00:23:14.420 "crdt3": 0 00:23:14.420 } 00:23:14.420 } 00:23:14.420 ] 00:23:14.420 }, 00:23:14.420 { 00:23:14.420 "subsystem": "iscsi", 00:23:14.420 "config": [ 00:23:14.420 { 00:23:14.420 "method": "iscsi_set_options", 00:23:14.420 "params": { 00:23:14.420 "node_base": "iqn.2016-06.io.spdk", 00:23:14.420 "max_sessions": 128, 00:23:14.420 "max_connections_per_session": 2, 00:23:14.420 "max_queue_depth": 64, 00:23:14.420 "default_time2wait": 2, 00:23:14.420 "default_time2retain": 20, 00:23:14.420 "first_burst_length": 8192, 00:23:14.420 "immediate_data": true, 00:23:14.420 "allow_duplicated_isid": false, 00:23:14.420 "error_recovery_level": 0, 00:23:14.420 "nop_timeout": 60, 00:23:14.420 "nop_in_interval": 30, 00:23:14.420 "disable_chap": false, 00:23:14.420 "require_chap": false, 00:23:14.420 "mutual_chap": false, 00:23:14.420 "chap_group": 0, 00:23:14.420 "max_large_datain_per_connection": 64, 00:23:14.420 "max_r2t_per_connection": 4, 00:23:14.420 "pdu_pool_size": 36864, 00:23:14.420 "immediate_data_pool_size": 16384, 00:23:14.420 "data_out_pool_size": 2048 00:23:14.420 } 00:23:14.420 } 00:23:14.420 ] 00:23:14.420 } 00:23:14.420 ] 00:23:14.420 }' 00:23:14.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.420 13:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.420 13:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.420 13:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.420 13:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:14.420 [2024-12-06 13:18:01.199519] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:23:14.420 [2024-12-06 13:18:01.199674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75389 ] 00:23:14.420 [2024-12-06 13:18:01.372054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.678 [2024-12-06 13:18:01.499397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.612 [2024-12-06 13:18:02.544150] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:15.612 [2024-12-06 13:18:02.545372] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:15.612 [2024-12-06 13:18:02.552368] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:15.612 [2024-12-06 13:18:02.552515] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:15.612 [2024-12-06 13:18:02.552534] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:15.612 [2024-12-06 13:18:02.552544] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:15.612 [2024-12-06 13:18:02.560387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:15.612 [2024-12-06 13:18:02.560448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:15.612 [2024-12-06 13:18:02.568214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:15.612 [2024-12-06 13:18:02.568389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:15.612 [2024-12-06 13:18:02.585192] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75389 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75389 ']' 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75389 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75389 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.871 killing process with pid 75389 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75389' 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75389 00:23:15.871 13:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75389 00:23:17.774 [2024-12-06 13:18:04.504686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:17.774 [2024-12-06 13:18:04.533281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:17.774 [2024-12-06 13:18:04.533491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:17.774 [2024-12-06 13:18:04.541280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:17.774 [2024-12-06 13:18:04.541350] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:17.774 [2024-12-06 13:18:04.541365] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:17.774 [2024-12-06 13:18:04.541405] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:17.774 [2024-12-06 13:18:04.541595] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:19.674 13:18:06 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:23:19.674 00:23:19.674 real 0m10.240s 00:23:19.674 user 0m7.657s 00:23:19.674 sys 0m3.521s 00:23:19.674 13:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.674 13:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:19.674 ************************************ 00:23:19.674 END TEST test_save_ublk_config 00:23:19.674 ************************************ 00:23:19.674 13:18:06 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75476 00:23:19.674 13:18:06 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.674 13:18:06 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:23:19.674 13:18:06 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75476 00:23:19.674 13:18:06 ublk -- common/autotest_common.sh@835 -- # '[' -z 75476 ']' 00:23:19.674 13:18:06 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.674 13:18:06 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.674 13:18:06 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.674 13:18:06 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.674 13:18:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:19.674 [2024-12-06 13:18:06.528876] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
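test_save_ublk_config, which just wrapped up, is a round-trip check: a ublk target and one disk are created, the live configuration is dumped (the large JSON echoed twice above), and a second spdk_tgt is booted straight from that dump; the -c /dev/fd/63 on its command line is bash process substitution feeding the saved JSON back in. A minimal manual sketch of the same loop, assuming the default RPC socket and an arbitrary temp file:

  sudo scripts/rpc.py save_config > /tmp/ublk.json      # dump the live config
  sudo build/bin/spdk_tgt -L ublk -c /tmp/ublk.json &   # boot a fresh target from it
  sudo scripts/rpc.py ublk_get_disks                    # /dev/ublkb0 should reappear

The test passes when the restored target reports the same /dev/ublkb0 block device the original created.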
00:23:19.674 [2024-12-06 13:18:06.529051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75476 ] 00:23:19.931 [2024-12-06 13:18:06.705911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:19.931 [2024-12-06 13:18:06.838904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.931 [2024-12-06 13:18:06.838921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.864 13:18:07 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.864 13:18:07 ublk -- common/autotest_common.sh@868 -- # return 0 00:23:20.864 13:18:07 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:23:20.864 13:18:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:20.864 13:18:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.864 13:18:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:20.864 ************************************ 00:23:20.864 START TEST test_create_ublk 00:23:20.864 ************************************ 00:23:20.864 13:18:07 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:23:20.864 13:18:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:23:20.864 13:18:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.864 13:18:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:20.864 [2024-12-06 13:18:07.748155] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:20.864 [2024-12-06 13:18:07.751079] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:20.864 13:18:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.864 13:18:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:23:20.864 13:18:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:23:20.864 13:18:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.864 13:18:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 13:18:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:23:21.129 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:23:21.129 13:18:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 13:18:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 [2024-12-06 13:18:08.057326] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:23:21.129 [2024-12-06 13:18:08.057851] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:23:21.129 [2024-12-06 13:18:08.057878] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:21.129 [2024-12-06 13:18:08.057889] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:21.129 [2024-12-06 13:18:08.065187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:21.129 [2024-12-06 13:18:08.065216] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:21.129 
[2024-12-06 13:18:08.073167] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:21.129 [2024-12-06 13:18:08.073930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:21.129 [2024-12-06 13:18:08.088265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:21.129 13:18:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:23:21.129 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:23:21.129 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:23:21.129 13:18:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 13:18:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 13:18:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:23:21.129 { 00:23:21.129 "ublk_device": "/dev/ublkb0", 00:23:21.129 "id": 0, 00:23:21.129 "queue_depth": 512, 00:23:21.129 "num_queues": 4, 00:23:21.129 "bdev_name": "Malloc0" 00:23:21.129 } 00:23:21.129 ]' 00:23:21.129 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:23:21.387 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:23:21.388 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:23:21.388 13:18:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
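The /dev/ublkb0 validated above is the product of a three-step RPC sequence: create the ublk target, create a backing malloc bdev, then expose the bdev through the kernel ublk driver (the ADD_DEV / SET_PARAMS / START_DEV control commands traced above). A hedged sketch of that sequence, using only RPCs and arguments that appear in this log, before the fio write/verify job below runs against the device:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create 128 4096                       # 128 MiB bdev, 4 KiB blocks; auto-assigned name here was Malloc0
    "$rpc" ublk_start_disk Malloc0 0 -q 4 -d 512             # id 0, 4 queues, depth 512 -> /dev/ublkb0
    "$rpc" ublk_get_disks -n 0 | jq -r '.[0].ublk_device'    # expect: /dev/ublkb0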
00:23:21.388 13:18:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:23:21.645 fio: verification read phase will never start because write phase uses all of runtime 00:23:21.645 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:23:21.645 fio-3.35 00:23:21.645 Starting 1 process 00:23:31.618 00:23:31.618 fio_test: (groupid=0, jobs=1): err= 0: pid=75529: Fri Dec 6 13:18:18 2024 00:23:31.618 write: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec); 0 zone resets 00:23:31.618 clat (usec): min=50, max=4038, avg=79.75, stdev=125.15 00:23:31.618 lat (usec): min=50, max=4039, avg=80.49, stdev=125.17 00:23:31.618 clat percentiles (usec): 00:23:31.618 | 1.00th=[ 59], 5.00th=[ 66], 10.00th=[ 68], 20.00th=[ 69], 00:23:31.618 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 74], 00:23:31.618 | 70.00th=[ 76], 80.00th=[ 79], 90.00th=[ 84], 95.00th=[ 89], 00:23:31.618 | 99.00th=[ 108], 99.50th=[ 120], 99.90th=[ 2737], 99.95th=[ 3163], 00:23:31.618 | 99.99th=[ 3785] 00:23:31.618 bw ( KiB/s): min=46520, max=52768, per=100.00%, avg=49355.37, stdev=1610.53, samples=19 00:23:31.618 iops : min=11630, max=13192, avg=12338.84, stdev=402.63, samples=19 00:23:31.618 lat (usec) : 100=98.35%, 250=1.34%, 500=0.01%, 750=0.02%, 1000=0.03% 00:23:31.618 lat (msec) : 2=0.08%, 4=0.17%, 10=0.01% 00:23:31.618 cpu : usr=3.48%, sys=8.11%, ctx=123410, majf=0, minf=795 00:23:31.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.618 issued rwts: total=0,123406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:31.618 00:23:31.618 Run status group 0 (all jobs): 00:23:31.618 WRITE: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:23:31.618 00:23:31.618 Disk stats (read/write): 00:23:31.618 ublkb0: ios=0/122094, merge=0/0, ticks=0/8820, in_queue=8821, util=99.10% 00:23:31.618 13:18:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:23:31.618 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.618 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:31.618 [2024-12-06 13:18:18.612667] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:31.877 [2024-12-06 13:18:18.646812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:31.877 [2024-12-06 13:18:18.647820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:31.877 [2024-12-06 13:18:18.654178] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:31.877 [2024-12-06 13:18:18.654512] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:31.877 [2024-12-06 13:18:18.654546] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.877 13:18:18 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:23:31.877 13:18:18 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:31.877 [2024-12-06 13:18:18.670267] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:23:31.877 request: 00:23:31.877 { 00:23:31.877 "ublk_id": 0, 00:23:31.877 "method": "ublk_stop_disk", 00:23:31.877 "req_id": 1 00:23:31.877 } 00:23:31.877 Got JSON-RPC error response 00:23:31.877 response: 00:23:31.877 { 00:23:31.877 "code": -19, 00:23:31.877 "message": "No such device" 00:23:31.877 } 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.877 13:18:18 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:31.877 [2024-12-06 13:18:18.686269] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:31.877 [2024-12-06 13:18:18.694143] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:31.877 [2024-12-06 13:18:18.694207] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.877 13:18:18 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.877 13:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:32.464 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.464 13:18:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:23:32.464 13:18:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:32.464 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.464 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:32.464 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.464 13:18:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:23:32.464 13:18:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:23:32.464 13:18:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:23:32.464 13:18:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:23:32.464 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.464 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:32.464 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.464 13:18:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:23:32.464 13:18:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:23:32.761 13:18:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:23:32.761 00:23:32.761 real 0m11.771s 00:23:32.761 user 0m0.808s 00:23:32.761 sys 0m0.909s 00:23:32.761 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.761 ************************************ 00:23:32.761 13:18:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:32.761 END TEST test_create_ublk 00:23:32.761 ************************************ 00:23:32.761 13:18:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:23:32.761 13:18:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.761 13:18:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.761 13:18:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:32.761 ************************************ 00:23:32.761 START TEST test_create_multi_ublk 00:23:32.761 ************************************ 00:23:32.761 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:23:32.761 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:23:32.761 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.761 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:32.762 [2024-12-06 13:18:19.585142] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:32.762 [2024-12-06 13:18:19.587987] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:32.762 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.762 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:23:32.762 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:23:32.762 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:32.762 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:23:32.762 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.762 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:33.020 [2024-12-06 13:18:19.885384] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:23:33.020 [2024-12-06 
13:18:19.886003] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:23:33.020 [2024-12-06 13:18:19.886028] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:33.020 [2024-12-06 13:18:19.886045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:33.020 [2024-12-06 13:18:19.893700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:33.020 [2024-12-06 13:18:19.893769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:33.020 [2024-12-06 13:18:19.901220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:33.020 [2024-12-06 13:18:19.902341] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:33.020 [2024-12-06 13:18:19.913267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.020 13:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:33.278 [2024-12-06 13:18:20.212353] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:23:33.278 [2024-12-06 13:18:20.212908] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:23:33.278 [2024-12-06 13:18:20.212935] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:23:33.278 [2024-12-06 13:18:20.212951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:23:33.278 [2024-12-06 13:18:20.220189] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:33.278 [2024-12-06 13:18:20.220220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:33.278 [2024-12-06 13:18:20.228162] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:33.278 [2024-12-06 13:18:20.229011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:23:33.278 [2024-12-06 13:18:20.237214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.278 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:33.537 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.537 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:23:33.537 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:23:33.537 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.537 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:33.537 [2024-12-06 13:18:20.538304] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:23:33.537 [2024-12-06 13:18:20.538849] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:23:33.537 [2024-12-06 13:18:20.538871] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:23:33.537 [2024-12-06 13:18:20.538884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:23:33.537 [2024-12-06 13:18:20.545148] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:33.537 [2024-12-06 13:18:20.545183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:33.795 [2024-12-06 13:18:20.553205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:33.795 [2024-12-06 13:18:20.554054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:23:33.795 [2024-12-06 13:18:20.557597] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:23:33.795 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.795 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:23:33.795 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:33.795 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:23:33.795 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.795 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:34.053 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.053 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:23:34.053 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:23:34.053 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:34.054 [2024-12-06 13:18:20.857376] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:23:34.054 [2024-12-06 13:18:20.857997] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:23:34.054 [2024-12-06 13:18:20.858025] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:23:34.054 [2024-12-06 13:18:20.858036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:23:34.054 [2024-12-06 13:18:20.865195] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:34.054 [2024-12-06 13:18:20.865232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:34.054 [2024-12-06 13:18:20.873179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:34.054 [2024-12-06 13:18:20.874010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:23:34.054 [2024-12-06 13:18:20.882293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:23:34.054 { 00:23:34.054 "ublk_device": "/dev/ublkb0", 00:23:34.054 "id": 0, 00:23:34.054 "queue_depth": 512, 00:23:34.054 "num_queues": 4, 00:23:34.054 "bdev_name": "Malloc0" 00:23:34.054 }, 00:23:34.054 { 00:23:34.054 "ublk_device": "/dev/ublkb1", 00:23:34.054 "id": 1, 00:23:34.054 "queue_depth": 512, 00:23:34.054 "num_queues": 4, 00:23:34.054 "bdev_name": "Malloc1" 00:23:34.054 }, 00:23:34.054 { 00:23:34.054 "ublk_device": "/dev/ublkb2", 00:23:34.054 "id": 2, 00:23:34.054 "queue_depth": 512, 00:23:34.054 "num_queues": 4, 00:23:34.054 "bdev_name": "Malloc2" 00:23:34.054 }, 00:23:34.054 { 00:23:34.054 "ublk_device": "/dev/ublkb3", 00:23:34.054 "id": 3, 00:23:34.054 "queue_depth": 512, 00:23:34.054 "num_queues": 4, 00:23:34.054 "bdev_name": "Malloc3" 00:23:34.054 } 00:23:34.054 ]' 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:23:34.054 13:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:23:34.054 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:34.054 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:23:34.312 13:18:21 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:23:34.312 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:23:34.571 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:34.830 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.089 13:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:35.089 [2024-12-06 13:18:21.935328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:23:35.089 [2024-12-06 13:18:21.991231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:35.089 [2024-12-06 13:18:21.992264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:35.089 [2024-12-06 13:18:21.999209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:35.089 [2024-12-06 13:18:21.999550] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:35.089 [2024-12-06 13:18:21.999575] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:35.089 [2024-12-06 13:18:22.015312] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:35.089 [2024-12-06 13:18:22.045697] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:35.089 [2024-12-06 13:18:22.046862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:35.089 [2024-12-06 13:18:22.055209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:35.089 [2024-12-06 13:18:22.055540] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:35.089 [2024-12-06 13:18:22.055571] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.089 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:35.089 [2024-12-06 13:18:22.071308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:23:35.348 [2024-12-06 13:18:22.103233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:35.348 [2024-12-06 13:18:22.104186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:23:35.348 [2024-12-06 13:18:22.111190] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:35.348 [2024-12-06 13:18:22.111511] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:23:35.348 [2024-12-06 13:18:22.111538] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:23:35.348 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.348 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:35.348 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:23:35.348 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.348 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:35.348 [2024-12-06 
13:18:22.127347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:23:35.348 [2024-12-06 13:18:22.166291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:35.348 [2024-12-06 13:18:22.167200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:23:35.348 [2024-12-06 13:18:22.175373] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:35.348 [2024-12-06 13:18:22.175754] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:23:35.348 [2024-12-06 13:18:22.175779] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:23:35.348 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.349 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:23:35.607 [2024-12-06 13:18:22.487268] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:35.607 [2024-12-06 13:18:22.495157] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:35.607 [2024-12-06 13:18:22.495213] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:35.608 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:23:35.608 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:35.608 13:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:35.608 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.608 13:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:36.175 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.175 13:18:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:36.175 13:18:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:36.175 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.175 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:36.814 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.814 13:18:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:36.814 13:18:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:23:36.814 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.814 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:37.076 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.076 13:18:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:37.076 13:18:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:23:37.076 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.076 13:18:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
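Teardown mirrors setup in reverse, and the leftover check that follows asserts nothing leaked. A condensed sketch of the sequence just traced above (every call and flag is taken from this log; -t 120 gives ublk_destroy_target a generous timeout to drain all queues before shutdown):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2 3; do "$rpc" ublk_stop_disk "$i"; done    # STOP_DEV then DEL_DEV per device
    "$rpc" -t 120 ublk_destroy_target
    for m in Malloc0 Malloc1 Malloc2 Malloc3; do "$rpc" bdev_malloc_delete "$m"; done
    "$rpc" bdev_get_bdevs | jq length                        # expect 0: no bdevs left behind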
00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:23:37.335 ************************************ 00:23:37.335 END TEST test_create_multi_ublk 00:23:37.335 ************************************ 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:23:37.335 00:23:37.335 real 0m4.752s 00:23:37.335 user 0m1.335s 00:23:37.335 sys 0m0.181s 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:37.335 13:18:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:37.594 13:18:24 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:37.594 13:18:24 ublk -- ublk/ublk.sh@147 -- # cleanup 00:23:37.594 13:18:24 ublk -- ublk/ublk.sh@130 -- # killprocess 75476 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@954 -- # '[' -z 75476 ']' 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@958 -- # kill -0 75476 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@959 -- # uname 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75476 00:23:37.594 killing process with pid 75476 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75476' 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@973 -- # kill 75476 00:23:37.594 13:18:24 ublk -- common/autotest_common.sh@978 -- # wait 75476 00:23:38.529 [2024-12-06 13:18:25.462677] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:38.529 [2024-12-06 13:18:25.462750] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:39.903 00:23:39.903 real 0m30.731s 00:23:39.903 user 0m44.114s 00:23:39.903 sys 0m10.709s 00:23:39.903 13:18:26 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.903 ************************************ 00:23:39.903 END TEST ublk 00:23:39.903 ************************************ 00:23:39.903 13:18:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:39.903 13:18:26 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:23:39.903 13:18:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.903 
13:18:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.903 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.903 ************************************ 00:23:39.903 START TEST ublk_recovery 00:23:39.903 ************************************ 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:23:39.903 * Looking for test storage... 00:23:39.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.903 13:18:26 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:39.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.903 --rc genhtml_branch_coverage=1 00:23:39.903 --rc genhtml_function_coverage=1 00:23:39.903 --rc genhtml_legend=1 00:23:39.903 --rc geninfo_all_blocks=1 00:23:39.903 --rc geninfo_unexecuted_blocks=1 00:23:39.903 00:23:39.903 ' 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:39.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.903 --rc genhtml_branch_coverage=1 00:23:39.903 --rc genhtml_function_coverage=1 00:23:39.903 --rc genhtml_legend=1 00:23:39.903 --rc geninfo_all_blocks=1 00:23:39.903 --rc geninfo_unexecuted_blocks=1 00:23:39.903 00:23:39.903 ' 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:39.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.903 --rc genhtml_branch_coverage=1 00:23:39.903 --rc genhtml_function_coverage=1 00:23:39.903 --rc genhtml_legend=1 00:23:39.903 --rc geninfo_all_blocks=1 00:23:39.903 --rc geninfo_unexecuted_blocks=1 00:23:39.903 00:23:39.903 ' 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:39.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.903 --rc genhtml_branch_coverage=1 00:23:39.903 --rc genhtml_function_coverage=1 00:23:39.903 --rc genhtml_legend=1 00:23:39.903 --rc geninfo_all_blocks=1 00:23:39.903 --rc geninfo_unexecuted_blocks=1 00:23:39.903 00:23:39.903 ' 00:23:39.903 13:18:26 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:39.903 13:18:26 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:23:39.903 13:18:26 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:23:39.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.903 13:18:26 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75899 00:23:39.903 13:18:26 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.903 13:18:26 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75899 00:23:39.903 13:18:26 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75899 ']' 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.903 13:18:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.160 [2024-12-06 13:18:27.059515] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:23:40.160 [2024-12-06 13:18:27.060708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75899 ] 00:23:40.417 [2024-12-06 13:18:27.250806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:40.417 [2024-12-06 13:18:27.388035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.417 [2024-12-06 13:18:27.388040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.348 13:18:28 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.348 13:18:28 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:23:41.348 13:18:28 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:23:41.348 13:18:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.348 13:18:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.348 [2024-12-06 13:18:28.273155] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:41.348 [2024-12-06 13:18:28.276061] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:41.348 13:18:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.348 13:18:28 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:23:41.348 13:18:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.348 13:18:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.605 malloc0 00:23:41.605 13:18:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.605 13:18:28 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:23:41.605 13:18:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.605 13:18:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.605 [2024-12-06 13:18:28.427339] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:23:41.605 [2024-12-06 13:18:28.427482] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:23:41.605 [2024-12-06 13:18:28.427505] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:23:41.605 [2024-12-06 13:18:28.427515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:23:41.605 [2024-12-06 13:18:28.430409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:41.605 [2024-12-06 13:18:28.430441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:41.605 [2024-12-06 13:18:28.438166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:41.605 [2024-12-06 13:18:28.438368] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:23:41.605 [2024-12-06 13:18:28.447140] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:23:41.605 1 00:23:41.605 13:18:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.605 13:18:28 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:23:42.537 13:18:29 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75940 00:23:42.537 13:18:29 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:23:42.537 13:18:29 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:23:42.794 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:42.794 fio-3.35 00:23:42.794 Starting 1 process 00:23:48.120 13:18:34 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75899 00:23:48.120 13:18:34 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:23:53.385 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75899 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:23:53.385 13:18:39 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76040 00:23:53.385 13:18:39 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:23:53.385 13:18:39 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.385 13:18:39 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76040 00:23:53.385 13:18:39 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76040 ']' 00:23:53.385 13:18:39 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.385 13:18:39 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.385 13:18:39 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.385 13:18:39 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.385 13:18:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.385 [2024-12-06 13:18:39.594726] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
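This is the crux of the recovery test: pid 75899 was SIGKILLed while the 60-second randrw fio job was still driving /dev/ublkb1, and the banner above is the replacement target starting up. A sketch of the crash-and-recover sequence the harness replays, assuming the same variables as the earlier sketches (waiting for the RPC socket is elided; the GET_DEV_INFO / START_USER_RECOVERY / END_USER_RECOVERY control commands traced below are driven by the final RPC):

    kill -9 "$spdk_pid"                             # hard-kill the target mid-I/O
    "$spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!        # relaunch; kernel ublk device 1 persists
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096    # recreate the same 64 MiB backing bdev
    "$rpc" ublk_recover_disk malloc0 1              # re-attach SPDK to kernel ublk dev 1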
00:23:53.385 [2024-12-06 13:18:39.594909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76040 ] 00:23:53.385 [2024-12-06 13:18:39.778972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:53.385 [2024-12-06 13:18:39.936053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.385 [2024-12-06 13:18:39.936058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:23:53.952 13:18:40 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.952 [2024-12-06 13:18:40.809152] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:53.952 [2024-12-06 13:18:40.812000] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.952 13:18:40 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.952 malloc0 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.952 13:18:40 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.952 [2024-12-06 13:18:40.952773] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:23:53.952 [2024-12-06 13:18:40.952823] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:23:53.952 [2024-12-06 13:18:40.952839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:23:53.952 [2024-12-06 13:18:40.960193] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:23:53.952 [2024-12-06 13:18:40.960226] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:23:53.952 [2024-12-06 13:18:40.960239] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:23:53.952 [2024-12-06 13:18:40.960344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:23:53.952 1 00:23:53.952 13:18:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.952 13:18:40 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75940 00:23:54.211 [2024-12-06 13:18:40.968165] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:23:54.211 [2024-12-06 13:18:40.975797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:23:54.211 [2024-12-06 13:18:40.983423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:23:54.211 [2024-12-06 
13:18:40.983457] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:24:50.424 00:24:50.424 fio_test: (groupid=0, jobs=1): err= 0: pid=75943: Fri Dec 6 13:19:29 2024 00:24:50.424 read: IOPS=18.2k, BW=71.2MiB/s (74.7MB/s)(4274MiB/60002msec) 00:24:50.424 slat (nsec): min=1820, max=3839.2k, avg=6032.59, stdev=4495.88 00:24:50.424 clat (usec): min=767, max=6534.0k, avg=3392.63, stdev=45870.86 00:24:50.424 lat (usec): min=773, max=6534.0k, avg=3398.66, stdev=45870.87 00:24:50.424 clat percentiles (usec): 00:24:50.424 | 1.00th=[ 2573], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2868], 00:24:50.424 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:24:50.424 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3982], 00:24:50.424 | 99.00th=[ 5669], 99.50th=[ 6456], 99.90th=[ 7767], 99.95th=[ 8717], 00:24:50.424 | 99.99th=[13304] 00:24:50.424 bw ( KiB/s): min=16352, max=85224, per=100.00%, avg=81124.79, stdev=8731.67, samples=107 00:24:50.424 iops : min= 4088, max=21306, avg=20281.20, stdev=2182.92, samples=107 00:24:50.424 write: IOPS=18.2k, BW=71.2MiB/s (74.7MB/s)(4273MiB/60002msec); 0 zone resets 00:24:50.424 slat (nsec): min=1982, max=2516.2k, avg=6203.91, stdev=3562.15 00:24:50.424 clat (usec): min=767, max=6534.3k, avg=3613.77, stdev=53710.46 00:24:50.424 lat (usec): min=772, max=6534.3k, avg=3619.98, stdev=53710.45 00:24:50.424 clat percentiles (usec): 00:24:50.424 | 1.00th=[ 2606], 5.00th=[ 2868], 10.00th=[ 2933], 20.00th=[ 2999], 00:24:50.424 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3097], 00:24:50.424 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 3851], 00:24:50.424 | 99.00th=[ 5669], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 8717], 00:24:50.424 | 99.99th=[13435] 00:24:50.424 bw ( KiB/s): min=16304, max=85248, per=100.00%, avg=81080.15, stdev=8718.31, samples=107 00:24:50.424 iops : min= 4076, max=21312, avg=20270.04, stdev=2179.58, samples=107 00:24:50.424 lat (usec) : 1000=0.01% 00:24:50.424 lat (msec) : 2=0.07%, 4=95.16%, 10=4.73%, 20=0.03%, >=2000=0.01% 00:24:50.424 cpu : usr=9.25%, sys=20.90%, ctx=69121, majf=0, minf=13 00:24:50.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:24:50.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:50.424 issued rwts: total=1094250,1093774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:50.424 00:24:50.424 Run status group 0 (all jobs): 00:24:50.424 READ: bw=71.2MiB/s (74.7MB/s), 71.2MiB/s-71.2MiB/s (74.7MB/s-74.7MB/s), io=4274MiB (4482MB), run=60002-60002msec 00:24:50.424 WRITE: bw=71.2MiB/s (74.7MB/s), 71.2MiB/s-71.2MiB/s (74.7MB/s-74.7MB/s), io=4273MiB (4480MB), run=60002-60002msec 00:24:50.424 00:24:50.424 Disk stats (read/write): 00:24:50.424 ublkb1: ios=1091912/1091437, merge=0/0, ticks=3614114/3737009, in_queue=7351124, util=99.92% 00:24:50.424 13:19:29 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.424 [2024-12-06 13:19:29.721297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:50.424 [2024-12-06 13:19:29.759197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:24:50.424 [2024-12-06 13:19:29.759434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:50.424 [2024-12-06 13:19:29.770276] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:50.424 [2024-12-06 13:19:29.770560] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:50.424 [2024-12-06 13:19:29.770703] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.424 13:19:29 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.424 [2024-12-06 13:19:29.777353] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:50.424 [2024-12-06 13:19:29.785194] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:50.424 [2024-12-06 13:19:29.785253] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.424 13:19:29 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:24:50.424 13:19:29 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:24:50.424 13:19:29 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76040 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76040 ']' 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76040 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76040 00:24:50.424 killing process with pid 76040 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76040' 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76040 00:24:50.424 13:19:29 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76040 00:24:50.424 [2024-12-06 13:19:31.322689] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:50.424 [2024-12-06 13:19:31.322967] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:50.424 ************************************ 00:24:50.424 END TEST ublk_recovery 00:24:50.424 ************************************ 00:24:50.424 00:24:50.424 real 1m5.934s 00:24:50.424 user 1m49.397s 00:24:50.424 sys 0m29.609s 00:24:50.424 13:19:32 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.424 13:19:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.424 13:19:32 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:24:50.424 13:19:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:50.424 13:19:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.424 13:19:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.424 13:19:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:24:50.424 13:19:32 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:24:50.424 13:19:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:50.424 13:19:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.424 13:19:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.424 ************************************ 00:24:50.424 START TEST ftl 00:24:50.424 ************************************ 00:24:50.424 13:19:32 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:24:50.424 * Looking for test storage... 00:24:50.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.424 13:19:32 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:50.424 13:19:32 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:24:50.424 13:19:32 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:50.425 13:19:32 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.425 13:19:32 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.425 13:19:32 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.425 13:19:32 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.425 13:19:32 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.425 13:19:32 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.425 13:19:32 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.425 13:19:32 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.425 13:19:32 ftl -- scripts/common.sh@344 -- # case "$op" in 00:24:50.425 13:19:32 ftl -- scripts/common.sh@345 -- # : 1 00:24:50.425 13:19:32 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.425 13:19:32 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.425 13:19:32 ftl -- scripts/common.sh@365 -- # decimal 1 00:24:50.425 13:19:32 ftl -- scripts/common.sh@353 -- # local d=1 00:24:50.425 13:19:32 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.425 13:19:32 ftl -- scripts/common.sh@355 -- # echo 1 00:24:50.425 13:19:32 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.425 13:19:32 ftl -- scripts/common.sh@366 -- # decimal 2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@353 -- # local d=2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.425 13:19:32 ftl -- scripts/common.sh@355 -- # echo 2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.425 13:19:32 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.425 13:19:32 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.425 13:19:32 ftl -- scripts/common.sh@368 -- # return 0 00:24:50.425 13:19:32 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.425 13:19:32 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.425 --rc genhtml_branch_coverage=1 00:24:50.425 --rc genhtml_function_coverage=1 00:24:50.425 --rc genhtml_legend=1 00:24:50.425 --rc geninfo_all_blocks=1 00:24:50.425 --rc geninfo_unexecuted_blocks=1 00:24:50.425 00:24:50.425 ' 00:24:50.425 13:19:32 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.425 --rc genhtml_branch_coverage=1 00:24:50.425 --rc genhtml_function_coverage=1 00:24:50.425 --rc genhtml_legend=1 00:24:50.425 --rc geninfo_all_blocks=1 00:24:50.425 --rc geninfo_unexecuted_blocks=1 00:24:50.425 00:24:50.425 ' 00:24:50.425 13:19:32 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.425 --rc genhtml_branch_coverage=1 00:24:50.425 --rc genhtml_function_coverage=1 00:24:50.425 --rc genhtml_legend=1 00:24:50.425 --rc geninfo_all_blocks=1 00:24:50.425 --rc geninfo_unexecuted_blocks=1 00:24:50.425 00:24:50.425 ' 00:24:50.425 13:19:32 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.425 --rc genhtml_branch_coverage=1 00:24:50.425 --rc genhtml_function_coverage=1 00:24:50.425 --rc genhtml_legend=1 00:24:50.425 --rc geninfo_all_blocks=1 00:24:50.425 --rc geninfo_unexecuted_blocks=1 00:24:50.425 00:24:50.425 ' 00:24:50.425 13:19:32 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:50.425 13:19:32 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:24:50.425 13:19:32 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.425 13:19:32 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.425 13:19:32 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
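The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on '.', '-' or ':' and compares them field by field. A condensed restatement (the real cmp_versions takes the operator as an argument and validates that each field is numeric; this sketch hardcodes '<'):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly smaller: '<' true
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly larger: '<' false
    done
    return 1    # all fields equal, so '<' is false
}

lt 1.15 2 && echo older    # takes the same return-0 path as the trace above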
00:24:50.425 13:19:32 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:50.425 13:19:32 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.425 13:19:32 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:50.425 13:19:32 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:50.425 13:19:32 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.425 13:19:32 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.425 13:19:32 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:50.425 13:19:32 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:50.425 13:19:32 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:50.425 13:19:32 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:50.425 13:19:32 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:50.425 13:19:32 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:50.425 13:19:32 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.425 13:19:32 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.425 13:19:32 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:50.425 13:19:32 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:50.425 13:19:32 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:50.425 13:19:32 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:50.425 13:19:32 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:50.425 13:19:32 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:50.425 13:19:32 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:50.425 13:19:32 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:50.425 13:19:32 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.425 13:19:32 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.425 13:19:32 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.425 13:19:32 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:24:50.425 13:19:32 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:24:50.425 13:19:32 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:24:50.425 13:19:32 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:24:50.425 13:19:32 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:50.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:50.425 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:50.425 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:50.425 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:50.425 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:50.425 13:19:33 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76836 00:24:50.425 13:19:33 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:24:50.425 13:19:33 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76836 00:24:50.425 13:19:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 76836 ']' 00:24:50.425 13:19:33 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.425 13:19:33 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.425 13:19:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.425 13:19:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.425 13:19:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:50.425 [2024-12-06 13:19:33.610891] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:24:50.425 [2024-12-06 13:19:33.611427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76836 ] 00:24:50.425 [2024-12-06 13:19:33.800442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.425 [2024-12-06 13:19:33.965650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.425 13:19:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.425 13:19:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:24:50.425 13:19:34 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:24:50.425 13:19:34 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:24:50.425 13:19:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:24:50.425 13:19:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@50 -- # break 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:24:50.425 13:19:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:24:50.425 13:19:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:24:50.425 13:19:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:24:50.425 13:19:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:24:50.425 13:19:37 ftl -- ftl/ftl.sh@63 -- # break 00:24:50.425 13:19:37 ftl -- ftl/ftl.sh@66 -- # killprocess 76836 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 76836 ']' 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@958 -- # kill -0 76836 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@959 -- # uname 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.425 13:19:37 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76836 00:24:50.425 killing process with pid 76836 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76836' 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@973 -- # kill 76836 00:24:50.425 13:19:37 ftl -- common/autotest_common.sh@978 -- # wait 76836 00:24:52.987 13:19:39 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:24:52.987 13:19:39 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:24:52.987 13:19:39 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:52.987 13:19:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.987 13:19:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:52.987 ************************************ 00:24:52.987 START TEST ftl_fio_basic 00:24:52.987 ************************************ 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:24:52.987 * Looking for test storage... 00:24:52.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:52.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.987 --rc genhtml_branch_coverage=1 00:24:52.987 --rc genhtml_function_coverage=1 00:24:52.987 --rc genhtml_legend=1 00:24:52.987 --rc geninfo_all_blocks=1 00:24:52.987 --rc geninfo_unexecuted_blocks=1 00:24:52.987 00:24:52.987 ' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:52.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.987 --rc genhtml_branch_coverage=1 00:24:52.987 --rc genhtml_function_coverage=1 00:24:52.987 --rc genhtml_legend=1 00:24:52.987 --rc geninfo_all_blocks=1 00:24:52.987 --rc geninfo_unexecuted_blocks=1 00:24:52.987 00:24:52.987 ' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:52.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.987 --rc genhtml_branch_coverage=1 00:24:52.987 --rc genhtml_function_coverage=1 00:24:52.987 --rc genhtml_legend=1 00:24:52.987 --rc geninfo_all_blocks=1 00:24:52.987 --rc geninfo_unexecuted_blocks=1 00:24:52.987 00:24:52.987 ' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:52.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.987 --rc genhtml_branch_coverage=1 00:24:52.987 --rc genhtml_function_coverage=1 00:24:52.987 --rc genhtml_legend=1 00:24:52.987 --rc geninfo_all_blocks=1 00:24:52.987 --rc geninfo_unexecuted_blocks=1 00:24:52.987 00:24:52.987 ' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:24:52.987 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76985 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76985 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76985 ']' 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.988 13:19:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:52.988 [2024-12-06 13:19:39.757946] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
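waitforlisten above blocks the test until pid 76985 both exists and answers RPC on /var/tmp/spdk.sock. A hypothetical reduction of that pattern (the real helper in autotest_common.sh is more thorough; rpc_get_methods is simply a cheap RPC to probe with):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1        # target died before listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
            rpc_get_methods &>/dev/null && return 0   # socket is up and answering
        sleep 0.5
    done
    return 1
}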
00:24:52.988 [2024-12-06 13:19:39.758107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76985 ] 00:24:52.988 [2024-12-06 13:19:39.934701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:53.245 [2024-12-06 13:19:40.075408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.245 [2024-12-06 13:19:40.075588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.245 [2024-12-06 13:19:40.075615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:24:54.175 13:19:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:54.432 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:54.691 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:54.691 { 00:24:54.691 "name": "nvme0n1", 00:24:54.691 "aliases": [ 00:24:54.691 "43599b70-9c58-455b-b5d8-8ba03b5a8a81" 00:24:54.691 ], 00:24:54.691 "product_name": "NVMe disk", 00:24:54.691 "block_size": 4096, 00:24:54.691 "num_blocks": 1310720, 00:24:54.691 "uuid": "43599b70-9c58-455b-b5d8-8ba03b5a8a81", 00:24:54.691 "numa_id": -1, 00:24:54.691 "assigned_rate_limits": { 00:24:54.691 "rw_ios_per_sec": 0, 00:24:54.691 "rw_mbytes_per_sec": 0, 00:24:54.691 "r_mbytes_per_sec": 0, 00:24:54.691 "w_mbytes_per_sec": 0 00:24:54.691 }, 00:24:54.691 "claimed": false, 00:24:54.691 "zoned": false, 00:24:54.691 "supported_io_types": { 00:24:54.691 "read": true, 00:24:54.691 "write": true, 00:24:54.691 "unmap": true, 00:24:54.691 "flush": true, 00:24:54.691 "reset": true, 00:24:54.691 "nvme_admin": true, 00:24:54.691 "nvme_io": true, 00:24:54.691 "nvme_io_md": false, 00:24:54.691 "write_zeroes": true, 00:24:54.691 "zcopy": false, 00:24:54.691 "get_zone_info": false, 00:24:54.691 "zone_management": false, 00:24:54.691 "zone_append": false, 00:24:54.691 "compare": true, 00:24:54.691 "compare_and_write": false, 00:24:54.691 "abort": true, 00:24:54.691 
"seek_hole": false, 00:24:54.691 "seek_data": false, 00:24:54.691 "copy": true, 00:24:54.691 "nvme_iov_md": false 00:24:54.691 }, 00:24:54.691 "driver_specific": { 00:24:54.691 "nvme": [ 00:24:54.691 { 00:24:54.691 "pci_address": "0000:00:11.0", 00:24:54.691 "trid": { 00:24:54.691 "trtype": "PCIe", 00:24:54.691 "traddr": "0000:00:11.0" 00:24:54.691 }, 00:24:54.691 "ctrlr_data": { 00:24:54.691 "cntlid": 0, 00:24:54.691 "vendor_id": "0x1b36", 00:24:54.691 "model_number": "QEMU NVMe Ctrl", 00:24:54.691 "serial_number": "12341", 00:24:54.691 "firmware_revision": "8.0.0", 00:24:54.691 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:54.691 "oacs": { 00:24:54.691 "security": 0, 00:24:54.691 "format": 1, 00:24:54.691 "firmware": 0, 00:24:54.691 "ns_manage": 1 00:24:54.691 }, 00:24:54.691 "multi_ctrlr": false, 00:24:54.691 "ana_reporting": false 00:24:54.691 }, 00:24:54.691 "vs": { 00:24:54.691 "nvme_version": "1.4" 00:24:54.691 }, 00:24:54.691 "ns_data": { 00:24:54.691 "id": 1, 00:24:54.691 "can_share": false 00:24:54.691 } 00:24:54.691 } 00:24:54.691 ], 00:24:54.691 "mp_policy": "active_passive" 00:24:54.691 } 00:24:54.691 } 00:24:54.691 ]' 00:24:54.691 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:54.691 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:54.953 13:19:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:55.217 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:24:55.217 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:55.473 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=32aca52f-86f8-4eb0-b7ba-e39e9db8f127 00:24:55.473 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 32aca52f-86f8-4eb0-b7ba-e39e9db8f127 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b42fa1fc-f7ff-4352-b0d2-3b63670e4959 
00:24:55.731 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:55.731 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:55.991 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:55.991 { 00:24:55.991 "name": "b42fa1fc-f7ff-4352-b0d2-3b63670e4959", 00:24:55.991 "aliases": [ 00:24:55.991 "lvs/nvme0n1p0" 00:24:55.991 ], 00:24:55.991 "product_name": "Logical Volume", 00:24:55.991 "block_size": 4096, 00:24:55.991 "num_blocks": 26476544, 00:24:55.991 "uuid": "b42fa1fc-f7ff-4352-b0d2-3b63670e4959", 00:24:55.991 "assigned_rate_limits": { 00:24:55.991 "rw_ios_per_sec": 0, 00:24:55.991 "rw_mbytes_per_sec": 0, 00:24:55.991 "r_mbytes_per_sec": 0, 00:24:55.991 "w_mbytes_per_sec": 0 00:24:55.991 }, 00:24:55.991 "claimed": false, 00:24:55.991 "zoned": false, 00:24:55.991 "supported_io_types": { 00:24:55.991 "read": true, 00:24:55.991 "write": true, 00:24:55.991 "unmap": true, 00:24:55.991 "flush": false, 00:24:55.991 "reset": true, 00:24:55.991 "nvme_admin": false, 00:24:55.991 "nvme_io": false, 00:24:55.991 "nvme_io_md": false, 00:24:55.991 "write_zeroes": true, 00:24:55.991 "zcopy": false, 00:24:55.991 "get_zone_info": false, 00:24:55.991 "zone_management": false, 00:24:55.991 "zone_append": false, 00:24:55.991 "compare": false, 00:24:55.991 "compare_and_write": false, 00:24:55.991 "abort": false, 00:24:55.991 "seek_hole": true, 00:24:55.991 "seek_data": true, 00:24:55.991 "copy": false, 00:24:55.991 "nvme_iov_md": false 00:24:55.991 }, 00:24:55.991 "driver_specific": { 00:24:55.991 "lvol": { 00:24:55.991 "lvol_store_uuid": "32aca52f-86f8-4eb0-b7ba-e39e9db8f127", 00:24:55.991 "base_bdev": "nvme0n1", 00:24:55.991 "thin_provision": true, 00:24:55.991 "num_allocated_clusters": 0, 00:24:55.991 "snapshot": false, 00:24:55.991 "clone": false, 00:24:55.991 "esnap_clone": false 00:24:55.991 } 00:24:55.991 } 00:24:55.991 } 00:24:55.991 ]' 00:24:55.991 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:55.991 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:55.991 13:19:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:56.249 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:56.249 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:56.249 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:24:56.249 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:24:56.249 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:24:56.249 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:56.506 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:56.506 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:56.506 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:56.506 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:56.506 13:19:43 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:56.506 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:56.506 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:56.506 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:56.772 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:56.772 { 00:24:56.772 "name": "b42fa1fc-f7ff-4352-b0d2-3b63670e4959", 00:24:56.772 "aliases": [ 00:24:56.772 "lvs/nvme0n1p0" 00:24:56.772 ], 00:24:56.772 "product_name": "Logical Volume", 00:24:56.772 "block_size": 4096, 00:24:56.772 "num_blocks": 26476544, 00:24:56.772 "uuid": "b42fa1fc-f7ff-4352-b0d2-3b63670e4959", 00:24:56.772 "assigned_rate_limits": { 00:24:56.772 "rw_ios_per_sec": 0, 00:24:56.772 "rw_mbytes_per_sec": 0, 00:24:56.772 "r_mbytes_per_sec": 0, 00:24:56.772 "w_mbytes_per_sec": 0 00:24:56.772 }, 00:24:56.772 "claimed": false, 00:24:56.772 "zoned": false, 00:24:56.772 "supported_io_types": { 00:24:56.772 "read": true, 00:24:56.772 "write": true, 00:24:56.772 "unmap": true, 00:24:56.772 "flush": false, 00:24:56.772 "reset": true, 00:24:56.772 "nvme_admin": false, 00:24:56.772 "nvme_io": false, 00:24:56.772 "nvme_io_md": false, 00:24:56.772 "write_zeroes": true, 00:24:56.772 "zcopy": false, 00:24:56.772 "get_zone_info": false, 00:24:56.772 "zone_management": false, 00:24:56.772 "zone_append": false, 00:24:56.772 "compare": false, 00:24:56.772 "compare_and_write": false, 00:24:56.772 "abort": false, 00:24:56.772 "seek_hole": true, 00:24:56.772 "seek_data": true, 00:24:56.772 "copy": false, 00:24:56.772 "nvme_iov_md": false 00:24:56.772 }, 00:24:56.772 "driver_specific": { 00:24:56.772 "lvol": { 00:24:56.772 "lvol_store_uuid": "32aca52f-86f8-4eb0-b7ba-e39e9db8f127", 00:24:56.772 "base_bdev": "nvme0n1", 00:24:56.772 "thin_provision": true, 00:24:56.772 "num_allocated_clusters": 0, 00:24:56.772 "snapshot": false, 00:24:56.772 "clone": false, 00:24:56.772 "esnap_clone": false 00:24:56.772 } 00:24:56.772 } 00:24:56.772 } 00:24:56.772 ]' 00:24:56.772 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:57.043 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:57.043 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:57.043 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:57.043 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:57.043 13:19:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:24:57.043 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:24:57.043 13:19:43 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:24:57.300 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:57.300 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b42fa1fc-f7ff-4352-b0d2-3b63670e4959 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:57.557 { 00:24:57.557 "name": "b42fa1fc-f7ff-4352-b0d2-3b63670e4959", 00:24:57.557 "aliases": [ 00:24:57.557 "lvs/nvme0n1p0" 00:24:57.557 ], 00:24:57.557 "product_name": "Logical Volume", 00:24:57.557 "block_size": 4096, 00:24:57.557 "num_blocks": 26476544, 00:24:57.557 "uuid": "b42fa1fc-f7ff-4352-b0d2-3b63670e4959", 00:24:57.557 "assigned_rate_limits": { 00:24:57.557 "rw_ios_per_sec": 0, 00:24:57.557 "rw_mbytes_per_sec": 0, 00:24:57.557 "r_mbytes_per_sec": 0, 00:24:57.557 "w_mbytes_per_sec": 0 00:24:57.557 }, 00:24:57.557 "claimed": false, 00:24:57.557 "zoned": false, 00:24:57.557 "supported_io_types": { 00:24:57.557 "read": true, 00:24:57.557 "write": true, 00:24:57.557 "unmap": true, 00:24:57.557 "flush": false, 00:24:57.557 "reset": true, 00:24:57.557 "nvme_admin": false, 00:24:57.557 "nvme_io": false, 00:24:57.557 "nvme_io_md": false, 00:24:57.557 "write_zeroes": true, 00:24:57.557 "zcopy": false, 00:24:57.557 "get_zone_info": false, 00:24:57.557 "zone_management": false, 00:24:57.557 "zone_append": false, 00:24:57.557 "compare": false, 00:24:57.557 "compare_and_write": false, 00:24:57.557 "abort": false, 00:24:57.557 "seek_hole": true, 00:24:57.557 "seek_data": true, 00:24:57.557 "copy": false, 00:24:57.557 "nvme_iov_md": false 00:24:57.557 }, 00:24:57.557 "driver_specific": { 00:24:57.557 "lvol": { 00:24:57.557 "lvol_store_uuid": "32aca52f-86f8-4eb0-b7ba-e39e9db8f127", 00:24:57.557 "base_bdev": "nvme0n1", 00:24:57.557 "thin_provision": true, 00:24:57.557 "num_allocated_clusters": 0, 00:24:57.557 "snapshot": false, 00:24:57.557 "clone": false, 00:24:57.557 "esnap_clone": false 00:24:57.557 } 00:24:57.557 } 00:24:57.557 } 00:24:57.557 ]' 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:24:57.557 13:19:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b42fa1fc-f7ff-4352-b0d2-3b63670e4959 -c nvc0n1p0 --l2p_dram_limit 60 00:24:57.818 [2024-12-06 13:19:44.810001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.818 [2024-12-06 13:19:44.810073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:57.818 [2024-12-06 13:19:44.810102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:57.818 
[2024-12-06 13:19:44.810115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.818 [2024-12-06 13:19:44.810246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.818 [2024-12-06 13:19:44.810273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.818 [2024-12-06 13:19:44.810297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:57.818 [2024-12-06 13:19:44.810310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.818 [2024-12-06 13:19:44.810368] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:57.818 [2024-12-06 13:19:44.811395] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:57.818 [2024-12-06 13:19:44.811584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.818 [2024-12-06 13:19:44.811606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.818 [2024-12-06 13:19:44.811623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:24:57.818 [2024-12-06 13:19:44.811635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.818 [2024-12-06 13:19:44.811795] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 85458d7e-867b-4430-a5b8-f296568fd33a 00:24:57.818 [2024-12-06 13:19:44.813941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.818 [2024-12-06 13:19:44.814013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:57.818 [2024-12-06 13:19:44.814042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:57.818 [2024-12-06 13:19:44.814074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.818 [2024-12-06 13:19:44.824744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.818 [2024-12-06 13:19:44.824823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.818 [2024-12-06 13:19:44.824846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.501 ms 00:24:57.819 [2024-12-06 13:19:44.824863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.819 [2024-12-06 13:19:44.825057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.819 [2024-12-06 13:19:44.825085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.819 [2024-12-06 13:19:44.825099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:24:57.819 [2024-12-06 13:19:44.825120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.819 [2024-12-06 13:19:44.825288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.819 [2024-12-06 13:19:44.825314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:57.819 [2024-12-06 13:19:44.825345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:57.819 [2024-12-06 13:19:44.825361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.819 [2024-12-06 13:19:44.825406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:57.819 [2024-12-06 13:19:44.830945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.819 [2024-12-06 
13:19:44.830991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.819 [2024-12-06 13:19:44.831019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.543 ms 00:24:57.819 [2024-12-06 13:19:44.831039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.819 [2024-12-06 13:19:44.831109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.819 [2024-12-06 13:19:44.831154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:57.819 [2024-12-06 13:19:44.831175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:57.819 [2024-12-06 13:19:44.831187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.819 [2024-12-06 13:19:44.831275] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:58.077 [2024-12-06 13:19:44.831491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:58.077 [2024-12-06 13:19:44.831531] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:58.077 [2024-12-06 13:19:44.831549] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:58.077 [2024-12-06 13:19:44.831573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:58.078 [2024-12-06 13:19:44.831590] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:58.078 [2024-12-06 13:19:44.831609] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:58.078 [2024-12-06 13:19:44.831621] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:58.078 [2024-12-06 13:19:44.831635] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:58.078 [2024-12-06 13:19:44.831647] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:58.078 [2024-12-06 13:19:44.831664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.078 [2024-12-06 13:19:44.831679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:58.078 [2024-12-06 13:19:44.831699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:24:58.078 [2024-12-06 13:19:44.831721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.078 [2024-12-06 13:19:44.831842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.078 [2024-12-06 13:19:44.831859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:58.078 [2024-12-06 13:19:44.831878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:58.078 [2024-12-06 13:19:44.831891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.078 [2024-12-06 13:19:44.832035] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:58.078 [2024-12-06 13:19:44.832053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:58.078 [2024-12-06 13:19:44.832081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832113] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:24:58.078 [2024-12-06 13:19:44.832140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:58.078 [2024-12-06 13:19:44.832197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.078 [2024-12-06 13:19:44.832228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:58.078 [2024-12-06 13:19:44.832241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:58.078 [2024-12-06 13:19:44.832258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.078 [2024-12-06 13:19:44.832271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:58.078 [2024-12-06 13:19:44.832293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:58.078 [2024-12-06 13:19:44.832306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:58.078 [2024-12-06 13:19:44.832341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:58.078 [2024-12-06 13:19:44.832391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:58.078 [2024-12-06 13:19:44.832442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:58.078 [2024-12-06 13:19:44.832494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:58.078 [2024-12-06 13:19:44.832536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:58.078 [2024-12-06 13:19:44.832583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.078 [2024-12-06 13:19:44.832634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:58.078 [2024-12-06 13:19:44.832647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:58.078 [2024-12-06 13:19:44.832661] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.078 [2024-12-06 13:19:44.832673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:58.078 [2024-12-06 13:19:44.832687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:58.078 [2024-12-06 13:19:44.832699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:58.078 [2024-12-06 13:19:44.832725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:58.078 [2024-12-06 13:19:44.832739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832751] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:58.078 [2024-12-06 13:19:44.832767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:58.078 [2024-12-06 13:19:44.832780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.078 [2024-12-06 13:19:44.832808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:58.078 [2024-12-06 13:19:44.832825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:58.078 [2024-12-06 13:19:44.832837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:58.078 [2024-12-06 13:19:44.832851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:58.078 [2024-12-06 13:19:44.832863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:58.078 [2024-12-06 13:19:44.832877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:58.078 [2024-12-06 13:19:44.832897] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:58.078 [2024-12-06 13:19:44.832916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.078 [2024-12-06 13:19:44.832929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:58.078 [2024-12-06 13:19:44.832944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:58.078 [2024-12-06 13:19:44.832956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:58.078 [2024-12-06 13:19:44.832970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:58.078 [2024-12-06 13:19:44.832989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:58.078 [2024-12-06 13:19:44.833005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:58.078 [2024-12-06 13:19:44.833017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:58.078 [2024-12-06 13:19:44.833036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:24:58.078 [2024-12-06 13:19:44.833048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:58.078 [2024-12-06 13:19:44.833065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:58.078 [2024-12-06 13:19:44.833077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:58.078 [2024-12-06 13:19:44.833091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:58.078 [2024-12-06 13:19:44.833103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:58.078 [2024-12-06 13:19:44.833118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:58.078 [2024-12-06 13:19:44.833144] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:58.078 [2024-12-06 13:19:44.833162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.078 [2024-12-06 13:19:44.833179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:58.078 [2024-12-06 13:19:44.833194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:58.078 [2024-12-06 13:19:44.833206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:58.078 [2024-12-06 13:19:44.833221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:58.078 [2024-12-06 13:19:44.833234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.078 [2024-12-06 13:19:44.833249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:58.078 [2024-12-06 13:19:44.833262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:24:58.078 [2024-12-06 13:19:44.833277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.078 [2024-12-06 13:19:44.833359] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
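[editor's note] The superblock dump above lists each metadata region as `Region type:<id> ver:<n> blk_offs:<hex> blk_sz:<hex>`, with offsets and sizes in FTL blocks. A minimal Python sketch for cross-checking those hex block counts against the MiB figures in the earlier `dump_region` output; the 4096-byte block size is an assumption here (it matches the `"block_size": 4096` that `bdev_get_bdevs` reports for ftl0 later in this log), and `parse_region` is our name, not an SPDK API:

```python
import re

# Assumed block size, matching the "block_size": 4096 reported for ftl0
# by bdev_get_bdevs further down in this log.
FTL_BLOCK_SIZE = 4096

REGION = re.compile(
    r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
    r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)"
)

def parse_region(line):
    """Turn one superblock dump line into MiB offsets (hypothetical helper)."""
    m = REGION.search(line)
    if m is None:
        return None
    rtype, ver, offs, size = m.groups()
    return {
        "type": int(rtype, 16),
        "version": int(ver),
        "offset_mib": int(offs, 16) * FTL_BLOCK_SIZE / 2**20,
        "size_mib": int(size, 16) * FTL_BLOCK_SIZE / 2**20,
    }

# type 0x3 at blk_offs:0x5020 -> 0x5020 * 4096 / 2**20 = 80.12 MiB, which
# matches the "Region band_md ... offset: 80.12 MiB" entry dumped earlier.
print(parse_region("Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80"))
```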
00:24:58.078 [2024-12-06 13:19:44.833384] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:02.261 [2024-12-06 13:19:48.412561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.412897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:02.261 [2024-12-06 13:19:48.413043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3579.220 ms 00:25:02.261 [2024-12-06 13:19:48.413202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.453045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.453387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:02.261 [2024-12-06 13:19:48.453526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.528 ms 00:25:02.261 [2024-12-06 13:19:48.453593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.454021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.454198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:02.261 [2024-12-06 13:19:48.454345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:02.261 [2024-12-06 13:19:48.454410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.508543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.508837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:02.261 [2024-12-06 13:19:48.508979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.912 ms 00:25:02.261 [2024-12-06 13:19:48.509041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.509167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.509324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:02.261 [2024-12-06 13:19:48.509379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:02.261 [2024-12-06 13:19:48.509424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.510159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.510336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:02.261 [2024-12-06 13:19:48.510454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:25:02.261 [2024-12-06 13:19:48.510515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.510744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.510805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:02.261 [2024-12-06 13:19:48.510908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:25:02.261 [2024-12-06 13:19:48.510966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.533383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.533606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:02.261 [2024-12-06 
13:19:48.533730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.238 ms 00:25:02.261 [2024-12-06 13:19:48.533880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.548726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:02.261 [2024-12-06 13:19:48.571216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.571519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:02.261 [2024-12-06 13:19:48.571670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.101 ms 00:25:02.261 [2024-12-06 13:19:48.571725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.646579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.646837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:02.261 [2024-12-06 13:19:48.647018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.747 ms 00:25:02.261 [2024-12-06 13:19:48.647044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.261 [2024-12-06 13:19:48.647334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.261 [2024-12-06 13:19:48.647363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:02.261 [2024-12-06 13:19:48.647386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:25:02.261 [2024-12-06 13:19:48.647400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.679312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.679366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:02.262 [2024-12-06 13:19:48.679405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.823 ms 00:25:02.262 [2024-12-06 13:19:48.679419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.709988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.710049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:02.262 [2024-12-06 13:19:48.710090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.502 ms 00:25:02.262 [2024-12-06 13:19:48.710102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.711013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.711065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:02.262 [2024-12-06 13:19:48.711102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:25:02.262 [2024-12-06 13:19:48.711128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.814131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.814256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:02.262 [2024-12-06 13:19:48.814290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.877 ms 00:25:02.262 [2024-12-06 13:19:48.814309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 
13:19:48.847983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.848055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:02.262 [2024-12-06 13:19:48.848096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.544 ms 00:25:02.262 [2024-12-06 13:19:48.848110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.879757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.879808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:02.262 [2024-12-06 13:19:48.879846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.550 ms 00:25:02.262 [2024-12-06 13:19:48.879859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.911170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.911220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:02.262 [2024-12-06 13:19:48.911243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.247 ms 00:25:02.262 [2024-12-06 13:19:48.911256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.911333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.911354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:02.262 [2024-12-06 13:19:48.911378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:02.262 [2024-12-06 13:19:48.911390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.911566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.262 [2024-12-06 13:19:48.911590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:02.262 [2024-12-06 13:19:48.911607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:02.262 [2024-12-06 13:19:48.911620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.262 [2024-12-06 13:19:48.913112] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4102.493 ms, result 0 00:25:02.262 { 00:25:02.262 "name": "ftl0", 00:25:02.262 "uuid": "85458d7e-867b-4430-a5b8-f296568fd33a" 00:25:02.262 } 00:25:02.262 13:19:48 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:25:02.262 13:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:02.262 13:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:02.262 13:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:25:02.262 13:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:02.262 13:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:02.262 13:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:02.262 13:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:02.595 [ 00:25:02.595 { 00:25:02.595 "name": "ftl0", 00:25:02.595 "aliases": [ 00:25:02.595 "85458d7e-867b-4430-a5b8-f296568fd33a" 00:25:02.595 ], 00:25:02.595 "product_name": "FTL 
disk", 00:25:02.595 "block_size": 4096, 00:25:02.595 "num_blocks": 20971520, 00:25:02.595 "uuid": "85458d7e-867b-4430-a5b8-f296568fd33a", 00:25:02.595 "assigned_rate_limits": { 00:25:02.595 "rw_ios_per_sec": 0, 00:25:02.595 "rw_mbytes_per_sec": 0, 00:25:02.595 "r_mbytes_per_sec": 0, 00:25:02.595 "w_mbytes_per_sec": 0 00:25:02.595 }, 00:25:02.595 "claimed": false, 00:25:02.595 "zoned": false, 00:25:02.595 "supported_io_types": { 00:25:02.595 "read": true, 00:25:02.595 "write": true, 00:25:02.595 "unmap": true, 00:25:02.595 "flush": true, 00:25:02.595 "reset": false, 00:25:02.595 "nvme_admin": false, 00:25:02.595 "nvme_io": false, 00:25:02.596 "nvme_io_md": false, 00:25:02.596 "write_zeroes": true, 00:25:02.596 "zcopy": false, 00:25:02.596 "get_zone_info": false, 00:25:02.596 "zone_management": false, 00:25:02.596 "zone_append": false, 00:25:02.596 "compare": false, 00:25:02.596 "compare_and_write": false, 00:25:02.596 "abort": false, 00:25:02.596 "seek_hole": false, 00:25:02.596 "seek_data": false, 00:25:02.596 "copy": false, 00:25:02.596 "nvme_iov_md": false 00:25:02.596 }, 00:25:02.596 "driver_specific": { 00:25:02.596 "ftl": { 00:25:02.596 "base_bdev": "b42fa1fc-f7ff-4352-b0d2-3b63670e4959", 00:25:02.596 "cache": "nvc0n1p0" 00:25:02.596 } 00:25:02.596 } 00:25:02.596 } 00:25:02.596 ] 00:25:02.596 13:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:25:02.596 13:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:25:02.596 13:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:03.163 13:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:25:03.163 13:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:03.163 [2024-12-06 13:19:50.110281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.110533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:03.163 [2024-12-06 13:19:50.110569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:03.163 [2024-12-06 13:19:50.110593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.163 [2024-12-06 13:19:50.110657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:03.163 [2024-12-06 13:19:50.114420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.114460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:03.163 [2024-12-06 13:19:50.114483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.729 ms 00:25:03.163 [2024-12-06 13:19:50.114514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.163 [2024-12-06 13:19:50.115057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.115084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:03.163 [2024-12-06 13:19:50.115118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:25:03.163 [2024-12-06 13:19:50.115130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.163 [2024-12-06 13:19:50.118307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.118344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:03.163 
[2024-12-06 13:19:50.118363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:25:03.163 [2024-12-06 13:19:50.118376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.163 [2024-12-06 13:19:50.125174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.125368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:03.163 [2024-12-06 13:19:50.125404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.763 ms 00:25:03.163 [2024-12-06 13:19:50.125417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.163 [2024-12-06 13:19:50.156921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.156968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:03.163 [2024-12-06 13:19:50.157024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.393 ms 00:25:03.163 [2024-12-06 13:19:50.157036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.163 [2024-12-06 13:19:50.175879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.175928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:03.163 [2024-12-06 13:19:50.175954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.779 ms 00:25:03.163 [2024-12-06 13:19:50.175979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.163 [2024-12-06 13:19:50.176238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.163 [2024-12-06 13:19:50.176262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:03.163 [2024-12-06 13:19:50.176279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:25:03.163 [2024-12-06 13:19:50.176292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.423 [2024-12-06 13:19:50.207278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.423 [2024-12-06 13:19:50.207338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:03.423 [2024-12-06 13:19:50.207375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.948 ms 00:25:03.423 [2024-12-06 13:19:50.207388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.423 [2024-12-06 13:19:50.238069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.423 [2024-12-06 13:19:50.238119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:03.423 [2024-12-06 13:19:50.238158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.622 ms 00:25:03.423 [2024-12-06 13:19:50.238171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.423 [2024-12-06 13:19:50.268591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.423 [2024-12-06 13:19:50.268640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:03.423 [2024-12-06 13:19:50.268662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.345 ms 00:25:03.423 [2024-12-06 13:19:50.268675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.423 [2024-12-06 13:19:50.299289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.423 [2024-12-06 13:19:50.299336] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:03.423 [2024-12-06 13:19:50.299372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.460 ms 00:25:03.423 [2024-12-06 13:19:50.299384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.423 [2024-12-06 13:19:50.299447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:03.423 [2024-12-06 13:19:50.299471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 
[2024-12-06 13:19:50.299826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.299990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:25:03.423 [2024-12-06 13:19:50.300274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:03.423 [2024-12-06 13:19:50.300561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.300989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:03.424 [2024-12-06 13:19:50.301116] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:03.424 [2024-12-06 13:19:50.301144] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85458d7e-867b-4430-a5b8-f296568fd33a 00:25:03.424 [2024-12-06 13:19:50.301159] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:03.424 [2024-12-06 13:19:50.301176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:03.424 [2024-12-06 13:19:50.301187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:03.424 [2024-12-06 13:19:50.301219] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:03.424 [2024-12-06 13:19:50.301231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:03.424 [2024-12-06 13:19:50.301245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:03.424 [2024-12-06 13:19:50.301256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:03.424 [2024-12-06 13:19:50.301270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:03.424 [2024-12-06 13:19:50.301280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:03.424 [2024-12-06 13:19:50.301295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.424 [2024-12-06 13:19:50.301307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:03.424 [2024-12-06 13:19:50.301323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.852 ms 00:25:03.424 [2024-12-06 13:19:50.301335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.424 [2024-12-06 13:19:50.318917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.424 [2024-12-06 13:19:50.318963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:03.424 [2024-12-06 13:19:50.318999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.486 ms 00:25:03.424 [2024-12-06 13:19:50.319011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.424 [2024-12-06 13:19:50.319548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.424 [2024-12-06 13:19:50.319570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:03.424 [2024-12-06 13:19:50.319587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:25:03.424 [2024-12-06 13:19:50.319600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.424 [2024-12-06 13:19:50.380218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.424 [2024-12-06 13:19:50.380292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:03.424 [2024-12-06 13:19:50.380331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.424 [2024-12-06 13:19:50.380345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
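[editor's note] The statistics dump above reports `total writes: 960`, `user writes: 0`, and `WAF: inf`, consistent with write amplification computed as total media writes over user writes: after a startup/shutdown cycle that performed only metadata writes, the ratio is undefined and prints as `inf`. As a worked check (the exact formula SPDK uses is our reading of these numbers, not quoted from its source):

```python
def waf(total_writes, user_writes):
    """Write amplification as total media writes over user writes."""
    return float("inf") if user_writes == 0 else total_writes / user_writes

# 960 total writes with 0 user writes reproduces the "WAF: inf" above.
assert waf(960, 0) == float("inf")
```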
00:25:03.424 [2024-12-06 13:19:50.380431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.424 [2024-12-06 13:19:50.380447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:03.424 [2024-12-06 13:19:50.380463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.424 [2024-12-06 13:19:50.380475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.424 [2024-12-06 13:19:50.380617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.424 [2024-12-06 13:19:50.380641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:03.424 [2024-12-06 13:19:50.380657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.424 [2024-12-06 13:19:50.380670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.424 [2024-12-06 13:19:50.380713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.424 [2024-12-06 13:19:50.380727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:03.424 [2024-12-06 13:19:50.380742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.424 [2024-12-06 13:19:50.380753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.492703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.492809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.682 [2024-12-06 13:19:50.492847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 13:19:50.492860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.578904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.579180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.682 [2024-12-06 13:19:50.579220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 13:19:50.579235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.579402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.579423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.682 [2024-12-06 13:19:50.579444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 13:19:50.579456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.579560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.579579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.682 [2024-12-06 13:19:50.579595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 13:19:50.579607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.579779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.579801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.682 [2024-12-06 13:19:50.579817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 
13:19:50.579833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.579913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.579932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:03.682 [2024-12-06 13:19:50.579949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 13:19:50.579961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.580023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.580040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.682 [2024-12-06 13:19:50.580056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 13:19:50.580070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.580179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.682 [2024-12-06 13:19:50.580200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.682 [2024-12-06 13:19:50.580217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.682 [2024-12-06 13:19:50.580229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.682 [2024-12-06 13:19:50.580448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.168 ms, result 0 00:25:03.682 true 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76985 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76985 ']' 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76985 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76985 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76985' 00:25:03.682 killing process with pid 76985 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76985 00:25:03.682 13:19:50 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76985 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:08.950 13:19:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:08.950 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:25:08.950 fio-3.35 00:25:08.950 Starting 1 thread 00:25:14.296 00:25:14.296 test: (groupid=0, jobs=1): err= 0: pid=77217: Fri Dec 6 13:20:00 2024 00:25:14.296 read: IOPS=938, BW=62.3MiB/s (65.4MB/s)(255MiB/4083msec) 00:25:14.296 slat (nsec): min=5790, max=38289, avg=7713.65, stdev=3284.04 00:25:14.296 clat (usec): min=330, max=749, avg=471.46, stdev=51.94 00:25:14.296 lat (usec): min=347, max=755, avg=479.17, stdev=52.83 00:25:14.296 clat percentiles (usec): 00:25:14.296 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 404], 20.00th=[ 441], 00:25:14.296 | 30.00th=[ 449], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 474], 00:25:14.296 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 562], 00:25:14.296 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 660], 99.95th=[ 717], 00:25:14.296 | 99.99th=[ 750] 00:25:14.296 write: IOPS=945, BW=62.8MiB/s (65.8MB/s)(256MiB/4078msec); 0 zone resets 00:25:14.296 slat (usec): min=19, max=139, avg=25.26, stdev= 6.27 00:25:14.296 clat (usec): min=393, max=1025, avg=542.52, stdev=65.44 00:25:14.296 lat (usec): min=420, max=1070, avg=567.78, stdev=66.28 00:25:14.296 clat percentiles (usec): 00:25:14.296 | 1.00th=[ 412], 5.00th=[ 461], 10.00th=[ 474], 20.00th=[ 486], 00:25:14.296 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 553], 00:25:14.296 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644], 00:25:14.296 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 930], 99.95th=[ 947], 00:25:14.296 | 99.99th=[ 1029] 00:25:14.296 bw ( KiB/s): min=62424, max=67184, per=99.99%, avg=64294.00, stdev=1597.22, samples=8 00:25:14.296 iops : min= 918, max= 988, avg=945.50, stdev=23.49, samples=8 00:25:14.296 lat (usec) : 500=50.66%, 750=48.59%, 1000=0.74% 00:25:14.296 lat (msec) : 
2=0.01% 00:25:14.296 cpu : usr=99.02%, sys=0.17%, ctx=9, majf=0, minf=1169 00:25:14.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:14.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:14.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:14.296 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:14.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:14.296 00:25:14.296 Run status group 0 (all jobs): 00:25:14.296 READ: bw=62.3MiB/s (65.4MB/s), 62.3MiB/s-62.3MiB/s (65.4MB/s-65.4MB/s), io=255MiB (267MB), run=4083-4083msec 00:25:14.296 WRITE: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=256MiB (269MB), run=4078-4078msec 00:25:15.685 ----------------------------------------------------- 00:25:15.685 Suppressions used: 00:25:15.685 count bytes template 00:25:15.686 1 5 /usr/src/fio/parse.c 00:25:15.686 1 8 libtcmalloc_minimal.so 00:25:15.686 1 904 libcrypto.so 00:25:15.686 ----------------------------------------------------- 00:25:15.686 00:25:15.686 13:20:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:25:15.686 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.686 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:15.944 13:20:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:16.202 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:16.202 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:16.202 fio-3.35 00:25:16.202 Starting 2 threads 00:25:48.259 00:25:48.259 first_half: (groupid=0, jobs=1): err= 0: pid=77321: Fri Dec 6 13:20:33 2024 00:25:48.259 read: IOPS=2273, BW=9093KiB/s (9311kB/s)(255MiB/28701msec) 00:25:48.259 slat (nsec): min=4825, max=51533, avg=7607.91, stdev=1904.55 00:25:48.259 clat (usec): min=955, max=333539, avg=43561.56, stdev=21036.59 00:25:48.259 lat (usec): min=964, max=333544, avg=43569.16, stdev=21036.80 00:25:48.259 clat percentiles (msec): 00:25:48.259 | 1.00th=[ 9], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], 00:25:48.259 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:25:48.259 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 49], 95.00th=[ 61], 00:25:48.259 | 99.00th=[ 165], 99.50th=[ 190], 99.90th=[ 213], 99.95th=[ 275], 00:25:48.259 | 99.99th=[ 321] 00:25:48.259 write: IOPS=2927, BW=11.4MiB/s (12.0MB/s)(256MiB/22387msec); 0 zone resets 00:25:48.259 slat (usec): min=5, max=566, avg= 9.61, stdev= 6.18 00:25:48.259 clat (usec): min=466, max=109069, avg=12653.67, stdev=22030.99 00:25:48.259 lat (usec): min=478, max=109077, avg=12663.28, stdev=22031.18 00:25:48.259 clat percentiles (usec): 00:25:48.259 | 1.00th=[ 988], 5.00th=[ 1303], 10.00th=[ 1516], 20.00th=[ 1893], 00:25:48.259 | 30.00th=[ 3359], 40.00th=[ 5014], 50.00th=[ 6128], 60.00th=[ 7177], 00:25:48.259 | 70.00th=[ 8586], 80.00th=[ 12780], 90.00th=[ 16909], 95.00th=[ 83362], 00:25:48.259 | 99.00th=[ 99091], 99.50th=[101188], 99.90th=[104334], 99.95th=[105382], 00:25:48.259 | 99.99th=[107480] 00:25:48.259 bw ( KiB/s): min= 88, max=40488, per=95.36%, avg=20163.62, stdev=10489.84, samples=26 00:25:48.259 iops : min= 22, max=10122, avg=5040.88, stdev=2622.47, samples=26 00:25:48.259 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.49% 00:25:48.259 lat (msec) : 2=10.69%, 4=6.03%, 10=20.46%, 20=8.75%, 50=45.33% 00:25:48.259 lat (msec) : 100=6.50%, 250=1.67%, 500=0.03% 00:25:48.259 cpu : usr=99.17%, sys=0.19%, ctx=45, majf=0, minf=5605 00:25:48.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:48.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.259 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:48.259 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:48.259 second_half: (groupid=0, jobs=1): err= 0: pid=77322: Fri Dec 6 13:20:33 2024 00:25:48.259 read: IOPS=2257, BW=9030KiB/s (9246kB/s)(255MiB/28905msec) 00:25:48.259 slat (nsec): min=4853, max=92001, avg=7296.43, stdev=1770.51 00:25:48.259 clat (usec): min=1027, max=338765, avg=42891.70, stdev=23456.73 00:25:48.259 lat (usec): min=1035, max=338773, avg=42899.00, stdev=23456.96 00:25:48.259 clat percentiles (msec): 00:25:48.259 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 39], 00:25:48.259 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:25:48.259 | 70.00th=[ 40], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 59], 
00:25:48.259 | 99.00th=[ 167], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 284], 00:25:48.259 | 99.99th=[ 330] 00:25:48.259 write: IOPS=2643, BW=10.3MiB/s (10.8MB/s)(256MiB/24796msec); 0 zone resets 00:25:48.259 slat (usec): min=5, max=572, avg= 9.30, stdev= 4.89 00:25:48.259 clat (usec): min=518, max=108299, avg=13724.70, stdev=23158.57 00:25:48.259 lat (usec): min=544, max=108310, avg=13734.00, stdev=23158.70 00:25:48.259 clat percentiles (usec): 00:25:48.259 | 1.00th=[ 979], 5.00th=[ 1254], 10.00th=[ 1450], 20.00th=[ 1778], 00:25:48.259 | 30.00th=[ 2376], 40.00th=[ 4178], 50.00th=[ 5932], 60.00th=[ 7242], 00:25:48.259 | 70.00th=[ 8848], 80.00th=[ 13960], 90.00th=[ 41157], 95.00th=[ 84411], 00:25:48.259 | 99.00th=[100140], 99.50th=[102237], 99.90th=[105382], 99.95th=[106431], 00:25:48.259 | 99.99th=[107480] 00:25:48.259 bw ( KiB/s): min= 1024, max=42640, per=95.36%, avg=20164.92, stdev=11567.26, samples=26 00:25:48.259 iops : min= 256, max=10660, avg=5041.23, stdev=2891.81, samples=26 00:25:48.259 lat (usec) : 750=0.04%, 1000=0.54% 00:25:48.259 lat (msec) : 2=12.27%, 4=6.72%, 10=17.65%, 20=8.80%, 50=46.57% 00:25:48.259 lat (msec) : 100=5.39%, 250=1.97%, 500=0.05% 00:25:48.259 cpu : usr=99.15%, sys=0.16%, ctx=67, majf=0, minf=5510 00:25:48.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:48.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.259 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:48.259 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:48.259 00:25:48.259 Run status group 0 (all jobs): 00:25:48.259 READ: bw=17.6MiB/s (18.5MB/s), 9030KiB/s-9093KiB/s (9246kB/s-9311kB/s), io=510MiB (534MB), run=28701-28905msec 00:25:48.259 WRITE: bw=20.6MiB/s (21.7MB/s), 10.3MiB/s-11.4MiB/s (10.8MB/s-12.0MB/s), io=512MiB (537MB), run=22387-24796msec 00:25:48.823 ----------------------------------------------------- 00:25:48.823 Suppressions used: 00:25:48.823 count bytes template 00:25:48.823 2 10 /usr/src/fio/parse.c 00:25:48.823 2 192 /usr/src/fio/iolog.c 00:25:48.823 1 8 libtcmalloc_minimal.so 00:25:48.823 1 904 libcrypto.so 00:25:48.823 ----------------------------------------------------- 00:25:48.823 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.823 
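[Editor's sketch] The fio_bdev/fio_plugin helper being traced here reduces to two steps: ldd the SPDK fio engine to find the ASAN runtime it was linked against, then LD_PRELOAD both (sanitizer first) when invoking fio. A minimal sketch of the same flow, using the paths from this run:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
  # Third ldd column is the resolved path of the ASAN runtime the plugin links against
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # The sanitizer must come first in LD_PRELOAD so its interceptors are installed
  # before the plugin's symbols are resolved
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"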
13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:48.823 13:20:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:25:49.081 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:49.081 fio-3.35 00:25:49.081 Starting 1 thread 00:26:07.187 00:26:07.187 test: (groupid=0, jobs=1): err= 0: pid=77675: Fri Dec 6 13:20:53 2024 00:26:07.187 read: IOPS=6441, BW=25.2MiB/s (26.4MB/s)(255MiB/10122msec) 00:26:07.187 slat (nsec): min=4710, max=51770, avg=6831.37, stdev=1889.66 00:26:07.187 clat (usec): min=768, max=40113, avg=19859.31, stdev=1128.36 00:26:07.187 lat (usec): min=773, max=40121, avg=19866.15, stdev=1128.38 00:26:07.187 clat percentiles (usec): 00:26:07.187 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19006], 20.00th=[19268], 00:26:07.187 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19792], 60.00th=[19792], 00:26:07.187 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20841], 95.00th=[21627], 00:26:07.187 | 99.00th=[23200], 99.50th=[24249], 99.90th=[30278], 99.95th=[35390], 00:26:07.187 | 99.99th=[39060] 00:26:07.187 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(256MiB/5709msec); 0 zone resets 00:26:07.187 slat (usec): min=5, max=301, avg= 9.28, stdev= 4.83 00:26:07.187 clat (usec): min=628, max=63122, avg=11092.98, stdev=13880.28 00:26:07.187 lat (usec): min=636, max=63129, avg=11102.27, stdev=13880.29 00:26:07.187 clat percentiles (usec): 00:26:07.187 | 1.00th=[ 979], 5.00th=[ 1172], 10.00th=[ 1303], 20.00th=[ 1500], 00:26:07.187 | 30.00th=[ 1713], 40.00th=[ 2180], 50.00th=[ 7570], 60.00th=[ 8586], 00:26:07.187 | 70.00th=[ 9634], 80.00th=[11338], 90.00th=[40109], 95.00th=[43779], 00:26:07.187 | 99.00th=[48497], 99.50th=[50070], 99.90th=[52691], 99.95th=[53740], 00:26:07.187 | 99.99th=[60031] 00:26:07.187 bw ( KiB/s): min=16408, max=61904, per=95.15%, avg=43690.67, stdev=12098.47, samples=12 00:26:07.187 iops : min= 4102, max=15476, avg=10922.67, stdev=3024.62, samples=12 00:26:07.187 lat (usec) : 750=0.01%, 1000=0.63% 00:26:07.187 lat (msec) : 2=18.34%, 4=1.99%, 10=15.41%, 20=40.99%, 50=22.39% 00:26:07.187 lat (msec) : 100=0.24% 00:26:07.187 cpu : usr=98.82%, sys=0.42%, ctx=26, majf=0, 
minf=5565 00:26:07.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:07.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.187 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:07.187 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:07.187 00:26:07.187 Run status group 0 (all jobs): 00:26:07.187 READ: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=255MiB (267MB), run=10122-10122msec 00:26:07.187 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=256MiB (268MB), run=5709-5709msec 00:26:08.122 ----------------------------------------------------- 00:26:08.122 Suppressions used: 00:26:08.122 count bytes template 00:26:08.122 1 5 /usr/src/fio/parse.c 00:26:08.122 2 192 /usr/src/fio/iolog.c 00:26:08.122 1 8 libtcmalloc_minimal.so 00:26:08.122 1 904 libcrypto.so 00:26:08.122 ----------------------------------------------------- 00:26:08.122 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:08.380 Remove shared memory files 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57995 /dev/shm/spdk_tgt_trace.pid75899 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:26:08.380 ************************************ 00:26:08.380 END TEST ftl_fio_basic 00:26:08.380 ************************************ 00:26:08.380 00:26:08.380 real 1m15.763s 00:26:08.380 user 2m49.113s 00:26:08.380 sys 0m4.145s 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.380 13:20:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:08.380 13:20:55 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:08.380 13:20:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:08.380 13:20:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.380 13:20:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:08.380 ************************************ 00:26:08.380 START TEST ftl_bdevperf 00:26:08.380 ************************************ 00:26:08.380 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:08.380 * Looking for test storage... 
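[Editor's sketch] For reference, the remove_shm teardown traced above amounts to clearing SPDK's leftover shared-memory state; a generalized sketch (this run removes two specific spdk_tgt trace files by pid, so the glob below is an assumption):

  rm -f /dev/shm/spdk_tgt_trace.pid*   # per-process trace files left by spdk_tgt instances
  rm -f /dev/shm/iscsi                 # legacy iSCSI shared-memory file, if present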
00:26:08.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.380 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:08.380 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:08.380 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:08.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.638 --rc genhtml_branch_coverage=1 00:26:08.638 --rc genhtml_function_coverage=1 00:26:08.638 --rc genhtml_legend=1 00:26:08.638 --rc geninfo_all_blocks=1 00:26:08.638 --rc geninfo_unexecuted_blocks=1 00:26:08.638 00:26:08.638 ' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:08.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.638 --rc genhtml_branch_coverage=1 00:26:08.638 
--rc genhtml_function_coverage=1 00:26:08.638 --rc genhtml_legend=1 00:26:08.638 --rc geninfo_all_blocks=1 00:26:08.638 --rc geninfo_unexecuted_blocks=1 00:26:08.638 00:26:08.638 ' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:08.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.638 --rc genhtml_branch_coverage=1 00:26:08.638 --rc genhtml_function_coverage=1 00:26:08.638 --rc genhtml_legend=1 00:26:08.638 --rc geninfo_all_blocks=1 00:26:08.638 --rc geninfo_unexecuted_blocks=1 00:26:08.638 00:26:08.638 ' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:08.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.638 --rc genhtml_branch_coverage=1 00:26:08.638 --rc genhtml_function_coverage=1 00:26:08.638 --rc genhtml_legend=1 00:26:08.638 --rc geninfo_all_blocks=1 00:26:08.638 --rc geninfo_unexecuted_blocks=1 00:26:08.638 00:26:08.638 ' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77939 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77939 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77939 ']' 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.638 13:20:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:08.638 [2024-12-06 13:20:55.543974] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
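[Editor's sketch] Two of the bdevperf flags above shape everything that follows: -z starts the app idle, waiting for a perform_tests RPC (sent much later via bdevperf.py) so the FTL bdev can first be assembled over RPC, and -T ftl0 names the bdev the test will target. A minimal sketch of the launch-and-wait pattern, assuming the default /var/tmp/spdk.sock RPC socket:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # waitforlisten in essence: poll the RPC socket until the app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done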
00:26:08.638 [2024-12-06 13:20:55.544413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77939 ] 00:26:08.897 [2024-12-06 13:20:55.721912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.897 [2024-12-06 13:20:55.858811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:26:09.464 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:10.030 13:20:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:10.289 { 00:26:10.289 "name": "nvme0n1", 00:26:10.289 "aliases": [ 00:26:10.289 "3381a761-abe3-44fe-9b13-3ee52da79d03" 00:26:10.289 ], 00:26:10.289 "product_name": "NVMe disk", 00:26:10.289 "block_size": 4096, 00:26:10.289 "num_blocks": 1310720, 00:26:10.289 "uuid": "3381a761-abe3-44fe-9b13-3ee52da79d03", 00:26:10.289 "numa_id": -1, 00:26:10.289 "assigned_rate_limits": { 00:26:10.289 "rw_ios_per_sec": 0, 00:26:10.289 "rw_mbytes_per_sec": 0, 00:26:10.289 "r_mbytes_per_sec": 0, 00:26:10.289 "w_mbytes_per_sec": 0 00:26:10.289 }, 00:26:10.289 "claimed": true, 00:26:10.289 "claim_type": "read_many_write_one", 00:26:10.289 "zoned": false, 00:26:10.289 "supported_io_types": { 00:26:10.289 "read": true, 00:26:10.289 "write": true, 00:26:10.289 "unmap": true, 00:26:10.289 "flush": true, 00:26:10.289 "reset": true, 00:26:10.289 "nvme_admin": true, 00:26:10.289 "nvme_io": true, 00:26:10.289 "nvme_io_md": false, 00:26:10.289 "write_zeroes": true, 00:26:10.289 "zcopy": false, 00:26:10.289 "get_zone_info": false, 00:26:10.289 "zone_management": false, 00:26:10.289 "zone_append": false, 00:26:10.289 "compare": true, 00:26:10.289 "compare_and_write": false, 00:26:10.289 "abort": true, 00:26:10.289 "seek_hole": false, 00:26:10.289 "seek_data": false, 00:26:10.289 "copy": true, 00:26:10.289 "nvme_iov_md": false 00:26:10.289 }, 00:26:10.289 "driver_specific": { 00:26:10.289 
"nvme": [ 00:26:10.289 { 00:26:10.289 "pci_address": "0000:00:11.0", 00:26:10.289 "trid": { 00:26:10.289 "trtype": "PCIe", 00:26:10.289 "traddr": "0000:00:11.0" 00:26:10.289 }, 00:26:10.289 "ctrlr_data": { 00:26:10.289 "cntlid": 0, 00:26:10.289 "vendor_id": "0x1b36", 00:26:10.289 "model_number": "QEMU NVMe Ctrl", 00:26:10.289 "serial_number": "12341", 00:26:10.289 "firmware_revision": "8.0.0", 00:26:10.289 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:10.289 "oacs": { 00:26:10.289 "security": 0, 00:26:10.289 "format": 1, 00:26:10.289 "firmware": 0, 00:26:10.289 "ns_manage": 1 00:26:10.289 }, 00:26:10.289 "multi_ctrlr": false, 00:26:10.289 "ana_reporting": false 00:26:10.289 }, 00:26:10.289 "vs": { 00:26:10.289 "nvme_version": "1.4" 00:26:10.289 }, 00:26:10.289 "ns_data": { 00:26:10.289 "id": 1, 00:26:10.289 "can_share": false 00:26:10.289 } 00:26:10.289 } 00:26:10.289 ], 00:26:10.289 "mp_policy": "active_passive" 00:26:10.289 } 00:26:10.289 } 00:26:10.289 ]' 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:10.289 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:10.548 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=32aca52f-86f8-4eb0-b7ba-e39e9db8f127 00:26:10.548 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:26:10.548 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32aca52f-86f8-4eb0-b7ba-e39e9db8f127 00:26:10.805 13:20:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:11.062 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ccadbf01-5e10-4818-a1ab-3419c5ee3b22 00:26:11.062 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ccadbf01-5e10-4818-a1ab-3419c5ee3b22 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:11.319 13:20:58 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:11.319 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:11.577 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:11.577 { 00:26:11.577 "name": "7e717bbb-a4b6-4fb2-af35-08f854a2c109", 00:26:11.577 "aliases": [ 00:26:11.577 "lvs/nvme0n1p0" 00:26:11.577 ], 00:26:11.577 "product_name": "Logical Volume", 00:26:11.577 "block_size": 4096, 00:26:11.577 "num_blocks": 26476544, 00:26:11.577 "uuid": "7e717bbb-a4b6-4fb2-af35-08f854a2c109", 00:26:11.577 "assigned_rate_limits": { 00:26:11.577 "rw_ios_per_sec": 0, 00:26:11.577 "rw_mbytes_per_sec": 0, 00:26:11.577 "r_mbytes_per_sec": 0, 00:26:11.577 "w_mbytes_per_sec": 0 00:26:11.577 }, 00:26:11.577 "claimed": false, 00:26:11.577 "zoned": false, 00:26:11.577 "supported_io_types": { 00:26:11.577 "read": true, 00:26:11.577 "write": true, 00:26:11.577 "unmap": true, 00:26:11.577 "flush": false, 00:26:11.577 "reset": true, 00:26:11.577 "nvme_admin": false, 00:26:11.577 "nvme_io": false, 00:26:11.577 "nvme_io_md": false, 00:26:11.577 "write_zeroes": true, 00:26:11.577 "zcopy": false, 00:26:11.577 "get_zone_info": false, 00:26:11.577 "zone_management": false, 00:26:11.577 "zone_append": false, 00:26:11.577 "compare": false, 00:26:11.577 "compare_and_write": false, 00:26:11.577 "abort": false, 00:26:11.577 "seek_hole": true, 00:26:11.577 "seek_data": true, 00:26:11.577 "copy": false, 00:26:11.577 "nvme_iov_md": false 00:26:11.577 }, 00:26:11.577 "driver_specific": { 00:26:11.577 "lvol": { 00:26:11.577 "lvol_store_uuid": "ccadbf01-5e10-4818-a1ab-3419c5ee3b22", 00:26:11.577 "base_bdev": "nvme0n1", 00:26:11.577 "thin_provision": true, 00:26:11.577 "num_allocated_clusters": 0, 00:26:11.577 "snapshot": false, 00:26:11.577 "clone": false, 00:26:11.577 "esnap_clone": false 00:26:11.577 } 00:26:11.577 } 00:26:11.577 } 00:26:11.577 ]' 00:26:11.577 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:11.577 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:11.577 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:11.834 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:11.834 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:11.834 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:11.834 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:26:11.834 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:26:11.834 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:12.090 13:20:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:12.347 { 00:26:12.347 "name": "7e717bbb-a4b6-4fb2-af35-08f854a2c109", 00:26:12.347 "aliases": [ 00:26:12.347 "lvs/nvme0n1p0" 00:26:12.347 ], 00:26:12.347 "product_name": "Logical Volume", 00:26:12.347 "block_size": 4096, 00:26:12.347 "num_blocks": 26476544, 00:26:12.347 "uuid": "7e717bbb-a4b6-4fb2-af35-08f854a2c109", 00:26:12.347 "assigned_rate_limits": { 00:26:12.347 "rw_ios_per_sec": 0, 00:26:12.347 "rw_mbytes_per_sec": 0, 00:26:12.347 "r_mbytes_per_sec": 0, 00:26:12.347 "w_mbytes_per_sec": 0 00:26:12.347 }, 00:26:12.347 "claimed": false, 00:26:12.347 "zoned": false, 00:26:12.347 "supported_io_types": { 00:26:12.347 "read": true, 00:26:12.347 "write": true, 00:26:12.347 "unmap": true, 00:26:12.347 "flush": false, 00:26:12.347 "reset": true, 00:26:12.347 "nvme_admin": false, 00:26:12.347 "nvme_io": false, 00:26:12.347 "nvme_io_md": false, 00:26:12.347 "write_zeroes": true, 00:26:12.347 "zcopy": false, 00:26:12.347 "get_zone_info": false, 00:26:12.347 "zone_management": false, 00:26:12.347 "zone_append": false, 00:26:12.347 "compare": false, 00:26:12.347 "compare_and_write": false, 00:26:12.347 "abort": false, 00:26:12.347 "seek_hole": true, 00:26:12.347 "seek_data": true, 00:26:12.347 "copy": false, 00:26:12.347 "nvme_iov_md": false 00:26:12.347 }, 00:26:12.347 "driver_specific": { 00:26:12.347 "lvol": { 00:26:12.347 "lvol_store_uuid": "ccadbf01-5e10-4818-a1ab-3419c5ee3b22", 00:26:12.347 "base_bdev": "nvme0n1", 00:26:12.347 "thin_provision": true, 00:26:12.347 "num_allocated_clusters": 0, 00:26:12.347 "snapshot": false, 00:26:12.347 "clone": false, 00:26:12.347 "esnap_clone": false 00:26:12.347 } 00:26:12.347 } 00:26:12.347 } 00:26:12.347 ]' 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:26:12.347 13:20:59 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:12.605 13:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:26:12.605 13:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:12.605 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:12.605 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:12.605 13:20:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:26:12.605 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:12.605 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e717bbb-a4b6-4fb2-af35-08f854a2c109 00:26:12.863 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:12.863 { 00:26:12.863 "name": "7e717bbb-a4b6-4fb2-af35-08f854a2c109", 00:26:12.863 "aliases": [ 00:26:12.863 "lvs/nvme0n1p0" 00:26:12.863 ], 00:26:12.863 "product_name": "Logical Volume", 00:26:12.863 "block_size": 4096, 00:26:12.863 "num_blocks": 26476544, 00:26:12.863 "uuid": "7e717bbb-a4b6-4fb2-af35-08f854a2c109", 00:26:12.863 "assigned_rate_limits": { 00:26:12.863 "rw_ios_per_sec": 0, 00:26:12.863 "rw_mbytes_per_sec": 0, 00:26:12.863 "r_mbytes_per_sec": 0, 00:26:12.863 "w_mbytes_per_sec": 0 00:26:12.863 }, 00:26:12.863 "claimed": false, 00:26:12.863 "zoned": false, 00:26:12.863 "supported_io_types": { 00:26:12.863 "read": true, 00:26:12.863 "write": true, 00:26:12.863 "unmap": true, 00:26:12.863 "flush": false, 00:26:12.863 "reset": true, 00:26:12.863 "nvme_admin": false, 00:26:12.863 "nvme_io": false, 00:26:12.863 "nvme_io_md": false, 00:26:12.863 "write_zeroes": true, 00:26:12.863 "zcopy": false, 00:26:12.863 "get_zone_info": false, 00:26:12.863 "zone_management": false, 00:26:12.863 "zone_append": false, 00:26:12.863 "compare": false, 00:26:12.863 "compare_and_write": false, 00:26:12.863 "abort": false, 00:26:12.863 "seek_hole": true, 00:26:12.863 "seek_data": true, 00:26:12.863 "copy": false, 00:26:12.863 "nvme_iov_md": false 00:26:12.863 }, 00:26:12.863 "driver_specific": { 00:26:12.863 "lvol": { 00:26:12.863 "lvol_store_uuid": "ccadbf01-5e10-4818-a1ab-3419c5ee3b22", 00:26:12.863 "base_bdev": "nvme0n1", 00:26:12.863 "thin_provision": true, 00:26:12.863 "num_allocated_clusters": 0, 00:26:12.863 "snapshot": false, 00:26:12.863 "clone": false, 00:26:12.863 "esnap_clone": false 00:26:12.863 } 00:26:12.863 } 00:26:12.863 } 00:26:12.863 ]' 00:26:12.863 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:13.122 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:13.122 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:13.122 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:13.122 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:13.122 13:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:13.122 13:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:26:13.122 13:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7e717bbb-a4b6-4fb2-af35-08f854a2c109 -c nvc0n1p0 --l2p_dram_limit 20 00:26:13.382 [2024-12-06 13:21:00.201029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.201099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:13.382 [2024-12-06 13:21:00.201154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:13.382 [2024-12-06 13:21:00.201174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.382 [2024-12-06 13:21:00.201256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.201277] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.382 [2024-12-06 13:21:00.201291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:13.382 [2024-12-06 13:21:00.201306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.382 [2024-12-06 13:21:00.201333] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:13.382 [2024-12-06 13:21:00.202361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:13.382 [2024-12-06 13:21:00.202542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.202570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.382 [2024-12-06 13:21:00.202585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:26:13.382 [2024-12-06 13:21:00.202600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.382 [2024-12-06 13:21:00.202756] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ffce2295-1387-428b-9f6d-86630af2fc43 00:26:13.382 [2024-12-06 13:21:00.204574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.204613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:13.382 [2024-12-06 13:21:00.204644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:13.382 [2024-12-06 13:21:00.204657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.382 [2024-12-06 13:21:00.214441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.214494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.382 [2024-12-06 13:21:00.214514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.712 ms 00:26:13.382 [2024-12-06 13:21:00.214531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.382 [2024-12-06 13:21:00.214708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.214731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.382 [2024-12-06 13:21:00.214751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:13.382 [2024-12-06 13:21:00.214764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.382 [2024-12-06 13:21:00.214839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.214857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:13.382 [2024-12-06 13:21:00.214872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:13.382 [2024-12-06 13:21:00.214885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.382 [2024-12-06 13:21:00.214921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:13.382 [2024-12-06 13:21:00.220168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.382 [2024-12-06 13:21:00.220211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.383 [2024-12-06 13:21:00.220227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:26:13.383 [2024-12-06 13:21:00.220245] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.383 [2024-12-06 13:21:00.220290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.383 [2024-12-06 13:21:00.220308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:13.383 [2024-12-06 13:21:00.220321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:13.383 [2024-12-06 13:21:00.220335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.383 [2024-12-06 13:21:00.220379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:13.383 [2024-12-06 13:21:00.220550] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:13.383 [2024-12-06 13:21:00.220570] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:13.383 [2024-12-06 13:21:00.220588] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:13.383 [2024-12-06 13:21:00.220603] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:13.383 [2024-12-06 13:21:00.220619] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:13.383 [2024-12-06 13:21:00.220632] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:13.383 [2024-12-06 13:21:00.220648] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:13.383 [2024-12-06 13:21:00.220660] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:13.383 [2024-12-06 13:21:00.220673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:13.383 [2024-12-06 13:21:00.220689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.383 [2024-12-06 13:21:00.220703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:13.383 [2024-12-06 13:21:00.220715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:26:13.383 [2024-12-06 13:21:00.220729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.383 [2024-12-06 13:21:00.220824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.383 [2024-12-06 13:21:00.220841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:13.383 [2024-12-06 13:21:00.220853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:13.383 [2024-12-06 13:21:00.220870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.383 [2024-12-06 13:21:00.220970] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:13.383 [2024-12-06 13:21:00.220999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:13.383 [2024-12-06 13:21:00.221011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:13.383 [2024-12-06 13:21:00.221051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:13.383 
[2024-12-06 13:21:00.221075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:13.383 [2024-12-06 13:21:00.221085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.383 [2024-12-06 13:21:00.221108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:13.383 [2024-12-06 13:21:00.221163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:13.383 [2024-12-06 13:21:00.221176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.383 [2024-12-06 13:21:00.221200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:13.383 [2024-12-06 13:21:00.221212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:13.383 [2024-12-06 13:21:00.221228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:13.383 [2024-12-06 13:21:00.221252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:13.383 [2024-12-06 13:21:00.221287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:13.383 [2024-12-06 13:21:00.221328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:13.383 [2024-12-06 13:21:00.221372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:13.383 [2024-12-06 13:21:00.221415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:13.383 [2024-12-06 13:21:00.221457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.383 [2024-12-06 13:21:00.221480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:13.383 [2024-12-06 13:21:00.221495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:13.383 [2024-12-06 13:21:00.221506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.383 [2024-12-06 13:21:00.221519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:13.383 [2024-12-06 13:21:00.221530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:26:13.383 [2024-12-06 13:21:00.221543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:13.383 [2024-12-06 13:21:00.221567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:13.383 [2024-12-06 13:21:00.221577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221589] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:13.383 [2024-12-06 13:21:00.221602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:13.383 [2024-12-06 13:21:00.221615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.383 [2024-12-06 13:21:00.221644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:13.383 [2024-12-06 13:21:00.221655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:13.383 [2024-12-06 13:21:00.221668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:13.383 [2024-12-06 13:21:00.221679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:13.383 [2024-12-06 13:21:00.221692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:13.383 [2024-12-06 13:21:00.221703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:13.383 [2024-12-06 13:21:00.221718] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:13.383 [2024-12-06 13:21:00.221733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.383 [2024-12-06 13:21:00.221751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:13.383 [2024-12-06 13:21:00.221763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:13.383 [2024-12-06 13:21:00.221777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:13.383 [2024-12-06 13:21:00.221789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:13.383 [2024-12-06 13:21:00.221802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:13.383 [2024-12-06 13:21:00.221814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:13.383 [2024-12-06 13:21:00.221828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:13.383 [2024-12-06 13:21:00.221839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:13.383 [2024-12-06 13:21:00.221857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:13.383 [2024-12-06 13:21:00.221869] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:13.383 [2024-12-06 13:21:00.221882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:13.383 [2024-12-06 13:21:00.221894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:13.383 [2024-12-06 13:21:00.221908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:13.383 [2024-12-06 13:21:00.221921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:13.383 [2024-12-06 13:21:00.221934] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:13.383 [2024-12-06 13:21:00.221948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.383 [2024-12-06 13:21:00.221966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:13.383 [2024-12-06 13:21:00.221978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:13.384 [2024-12-06 13:21:00.221992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:13.384 [2024-12-06 13:21:00.222004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:13.384 [2024-12-06 13:21:00.222019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.384 [2024-12-06 13:21:00.222031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:13.384 [2024-12-06 13:21:00.222046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:26:13.384 [2024-12-06 13:21:00.222058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.384 [2024-12-06 13:21:00.222113] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
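[Editor's sketch] The FTL instance whose startup is logged above was assembled with this RPC sequence, condensed from the trace (device addresses and UUIDs as created in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u ccadbf01-5e10-4818-a1ab-3419c5ee3b22
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV cache slice
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 7e717bbb-a4b6-4fb2-af35-08f854a2c109 \
          -c nvc0n1p0 --l2p_dram_limit 20

A quick cross-check of the layout dump: the L2P holds one 4-byte entry per logical block, so 20971520 entries * 4 B = 80 MiB, exactly the size reported for the l2p region; --l2p_dram_limit 20 then caps how much of that table stays resident in DRAM, which is why the log below reports an l2p maximum resident size of 19 (of 20) MiB.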
00:26:13.384 [2024-12-06 13:21:00.222158] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:15.923 [2024-12-06 13:21:02.784220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.923 [2024-12-06 13:21:02.784295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:15.923 [2024-12-06 13:21:02.784338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2562.117 ms 00:26:15.923 [2024-12-06 13:21:02.784351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.923 [2024-12-06 13:21:02.822854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.923 [2024-12-06 13:21:02.823093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:15.923 [2024-12-06 13:21:02.823165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.204 ms 00:26:15.923 [2024-12-06 13:21:02.823184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.923 [2024-12-06 13:21:02.823395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.923 [2024-12-06 13:21:02.823415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:15.923 [2024-12-06 13:21:02.823435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:15.923 [2024-12-06 13:21:02.823447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.923 [2024-12-06 13:21:02.879851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.923 [2024-12-06 13:21:02.879926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:15.923 [2024-12-06 13:21:02.879954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.345 ms 00:26:15.923 [2024-12-06 13:21:02.879967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.923 [2024-12-06 13:21:02.880034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.923 [2024-12-06 13:21:02.880049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.924 [2024-12-06 13:21:02.880066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:15.924 [2024-12-06 13:21:02.880081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.924 [2024-12-06 13:21:02.880809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.924 [2024-12-06 13:21:02.880835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.924 [2024-12-06 13:21:02.880853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:26:15.924 [2024-12-06 13:21:02.880865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.924 [2024-12-06 13:21:02.881045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.924 [2024-12-06 13:21:02.881063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.924 [2024-12-06 13:21:02.881080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:26:15.924 [2024-12-06 13:21:02.881092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.924 [2024-12-06 13:21:02.900588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.924 [2024-12-06 13:21:02.900636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.924 [2024-12-06 
13:21:02.900658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.467 ms 00:26:15.924 [2024-12-06 13:21:02.900686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.924 [2024-12-06 13:21:02.914935] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:26:15.924 [2024-12-06 13:21:02.922774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.924 [2024-12-06 13:21:02.922818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:15.924 [2024-12-06 13:21:02.922853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.947 ms 00:26:15.924 [2024-12-06 13:21:02.922867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.182 [2024-12-06 13:21:02.993726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.182 [2024-12-06 13:21:02.993828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:16.182 [2024-12-06 13:21:02.993850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.814 ms 00:26:16.182 [2024-12-06 13:21:02.993866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.182 [2024-12-06 13:21:02.994091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.182 [2024-12-06 13:21:02.994126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:16.182 [2024-12-06 13:21:02.994140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:26:16.182 [2024-12-06 13:21:02.994158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.182 [2024-12-06 13:21:03.022755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.182 [2024-12-06 13:21:03.022991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:16.183 [2024-12-06 13:21:03.023021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.491 ms 00:26:16.183 [2024-12-06 13:21:03.023039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.183 [2024-12-06 13:21:03.051303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.183 [2024-12-06 13:21:03.051348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:16.183 [2024-12-06 13:21:03.051381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.233 ms 00:26:16.183 [2024-12-06 13:21:03.051395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.183 [2024-12-06 13:21:03.052263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.183 [2024-12-06 13:21:03.052297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:16.183 [2024-12-06 13:21:03.052314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:26:16.183 [2024-12-06 13:21:03.052328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.183 [2024-12-06 13:21:03.132585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.183 [2024-12-06 13:21:03.132673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:16.183 [2024-12-06 13:21:03.132693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.193 ms 00:26:16.183 [2024-12-06 13:21:03.132708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.183 [2024-12-06 
13:21:03.165282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.183 [2024-12-06 13:21:03.165336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:16.183 [2024-12-06 13:21:03.165359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.480 ms 00:26:16.183 [2024-12-06 13:21:03.165375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.183 [2024-12-06 13:21:03.196489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.450 [2024-12-06 13:21:03.196685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:16.450 [2024-12-06 13:21:03.196714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.068 ms 00:26:16.450 [2024-12-06 13:21:03.196730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.450 [2024-12-06 13:21:03.227869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.450 [2024-12-06 13:21:03.227927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:16.450 [2024-12-06 13:21:03.227946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.091 ms 00:26:16.450 [2024-12-06 13:21:03.227962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.450 [2024-12-06 13:21:03.228015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.450 [2024-12-06 13:21:03.228040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:16.450 [2024-12-06 13:21:03.228054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:16.450 [2024-12-06 13:21:03.228069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.450 [2024-12-06 13:21:03.228223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.450 [2024-12-06 13:21:03.228249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:16.450 [2024-12-06 13:21:03.228263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:16.450 [2024-12-06 13:21:03.228277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.450 [2024-12-06 13:21:03.229554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3028.015 ms, result 0 00:26:16.450 { 00:26:16.450 "name": "ftl0", 00:26:16.450 "uuid": "ffce2295-1387-428b-9f6d-86630af2fc43" 00:26:16.450 } 00:26:16.450 13:21:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:26:16.450 13:21:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:26:16.450 13:21:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:26:16.709 13:21:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:26:16.709 [2024-12-06 13:21:03.677979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:16.709 I/O size of 69632 is greater than zero copy threshold (65536). 00:26:16.709 Zero copy mechanism will not be used. 00:26:16.709 Running I/O for 4 seconds... 
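The "Zero copy mechanism will not be used" notice above is expected for this pass: the run issues 69632-byte I/Os, and bdevperf only applies zero copy up to its 65536-byte threshold. 69632 bytes is 17 blocks of 4096 bytes, 4096 bytes over the 64 KiB cutoff. An illustrative shell check, not part of the test run:

echo $(( 69632 / 4096 ))    # 17 4-KiB blocks per I/O
echo $(( 69632 - 65536 ))   # 4096 bytes above the zero-copy threshold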
00:26:19.032 1763.00 IOPS, 117.07 MiB/s [2024-12-06T13:21:06.988Z] 1763.00 IOPS, 117.07 MiB/s [2024-12-06T13:21:07.924Z] 1769.00 IOPS, 117.47 MiB/s [2024-12-06T13:21:07.924Z] 1776.50 IOPS, 117.97 MiB/s
00:26:20.908 Latency(us) [2024-12-06T13:21:07.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.908 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:26:20.908 ftl0 : 4.00 1775.67 117.92 0.00 0.00 589.73 230.87 2517.18
00:26:20.908 [2024-12-06T13:21:07.924Z] ===================================================================================================================
00:26:20.908 [2024-12-06T13:21:07.924Z] Total : 1775.67 117.92 0.00 0.00 589.73 230.87 2517.18
00:26:20.908 [2024-12-06 13:21:07.691407] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:26:20.908 {
00:26:20.908 "results": [
00:26:20.908 {
00:26:20.908 "job": "ftl0",
00:26:20.908 "core_mask": "0x1",
00:26:20.908 "workload": "randwrite",
00:26:20.908 "status": "finished",
00:26:20.908 "queue_depth": 1,
00:26:20.908 "io_size": 69632,
00:26:20.908 "runtime": 4.002439,
00:26:20.908 "iops": 1775.66728687183,
00:26:20.908 "mibps": 117.91540576883246,
00:26:20.908 "io_failed": 0,
00:26:20.908 "io_timeout": 0,
00:26:20.908 "avg_latency_us": 589.7290098110698,
00:26:20.908 "min_latency_us": 230.86545454545455,
00:26:20.908 "max_latency_us": 2517.1781818181817
00:26:20.908 }
00:26:20.908 ],
00:26:20.908 "core_count": 1
00:26:20.908 }
00:26:20.908 13:21:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 [2024-12-06 13:21:07.833987] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:26:20.908 Running I/O for 4 seconds...
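As a cross-check of the result block above, the reported "mibps" follows directly from iops x io_size. A minimal awk sketch using only numbers printed in this log:

awk 'BEGIN { printf "%.2f MiB/s\n", 1775.66728687183 * 69632 / 1048576 }'   # prints 117.92, matching the reported mibps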
00:26:23.218 7301.00 IOPS, 28.52 MiB/s [2024-12-06T13:21:11.170Z] 7380.50 IOPS, 28.83 MiB/s [2024-12-06T13:21:12.104Z] 7343.67 IOPS, 28.69 MiB/s [2024-12-06T13:21:12.104Z] 7217.50 IOPS, 28.19 MiB/s
00:26:25.088 Latency(us) [2024-12-06T13:21:12.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.088 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:26:25.089 ftl0 : 4.02 7210.46 28.17 0.00 0.00 17703.35 310.92 32172.22
00:26:25.089 [2024-12-06T13:21:12.105Z] ===================================================================================================================
00:26:25.089 [2024-12-06T13:21:12.105Z] Total : 7210.46 28.17 0.00 0.00 17703.35 0.00 32172.22
00:26:25.089 [2024-12-06 13:21:11.865859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:26:25.089 {
00:26:25.089 "results": [
00:26:25.089 {
00:26:25.089 "job": "ftl0",
00:26:25.089 "core_mask": "0x1",
00:26:25.089 "workload": "randwrite",
00:26:25.089 "status": "finished",
00:26:25.089 "queue_depth": 128,
00:26:25.089 "io_size": 4096,
00:26:25.089 "runtime": 4.021379,
00:26:25.089 "iops": 7210.461883846312,
00:26:25.089 "mibps": 28.165866733774656,
00:26:25.089 "io_failed": 0,
00:26:25.089 "io_timeout": 0,
00:26:25.089 "avg_latency_us": 17703.353189781665,
00:26:25.089 "min_latency_us": 310.9236363636364,
00:26:25.089 "max_latency_us": 32172.21818181818
00:26:25.089 }
00:26:25.089 ],
00:26:25.089 "core_count": 1
00:26:25.089 }
00:26:25.089 13:21:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-12-06 13:21:12.040813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:26:25.089 Running I/O for 4 seconds...
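The queue-depth-128 run above is also self-consistent under Little's law (in-flight = IOPS x mean latency): the product should approximate the configured queue depth when the queue stays full. A minimal awk sketch using the logged values:

awk 'BEGIN { printf "in-flight ~ %.1f\n", 7210.461883846312 * 17703.353189781665 / 1e6 }'   # ~ 127.7, close to the depth of 128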
00:26:27.395 6172.00 IOPS, 24.11 MiB/s [2024-12-06T13:21:15.343Z] 6189.50 IOPS, 24.18 MiB/s [2024-12-06T13:21:16.273Z] 6187.67 IOPS, 24.17 MiB/s [2024-12-06T13:21:16.273Z] 6168.75 IOPS, 24.10 MiB/s
00:26:29.257 Latency(us) [2024-12-06T13:21:16.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.257 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:29.257 Verification LBA range: start 0x0 length 0x1400000
00:26:29.257 ftl0 : 4.01 6180.39 24.14 0.00 0.00 20639.36 364.92 22997.18
00:26:29.257 [2024-12-06T13:21:16.273Z] ===================================================================================================================
00:26:29.257 [2024-12-06T13:21:16.273Z] Total : 6180.39 24.14 0.00 0.00 20639.36 0.00 22997.18
00:26:29.257 {
00:26:29.257 "results": [
00:26:29.257 {
00:26:29.257 "job": "ftl0",
00:26:29.257 "core_mask": "0x1",
00:26:29.257 "workload": "verify",
00:26:29.257 "status": "finished",
00:26:29.257 "verify_range": {
00:26:29.257 "start": 0,
00:26:29.257 "length": 20971520
00:26:29.257 },
00:26:29.257 "queue_depth": 128,
00:26:29.257 "io_size": 4096,
00:26:29.257 "runtime": 4.013176,
00:26:29.257 "iops": 6180.391789445566,
00:26:29.257 "mibps": 24.142155427521743,
00:26:29.257 "io_failed": 0,
00:26:29.257 "io_timeout": 0,
00:26:29.257 "avg_latency_us": 20639.357126447314,
00:26:29.257 "min_latency_us": 364.91636363636366,
00:26:29.257 "max_latency_us": 22997.17818181818
00:26:29.257 }
00:26:29.257 ],
00:26:29.257 "core_count": 1
00:26:29.257 }
00:26:29.257 [2024-12-06 13:21:16.073041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:26:29.257 13:21:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:26:29.514 [2024-12-06 13:21:16.387047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:29.514 [2024-12-06 13:21:16.387108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:29.514 [2024-12-06 13:21:16.387166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:29.514 [2024-12-06 13:21:16.387186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.514 [2024-12-06 13:21:16.387222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:29.514 [2024-12-06 13:21:16.391011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:29.514 [2024-12-06 13:21:16.391055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:29.514 [2024-12-06 13:21:16.391072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.761 ms
00:26:29.514 [2024-12-06 13:21:16.391084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.514 [2024-12-06 13:21:16.393124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:29.514 [2024-12-06 13:21:16.393327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:29.514 [2024-12-06 13:21:16.393507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.011 ms
00:26:29.514 [2024-12-06 13:21:16.393559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:29.772 [2024-12-06 13:21:16.579071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:29.772 [2024-12-06 13:21:16.579380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:26:29.772 [2024-12-06 13:21:16.579532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 185.371 ms 00:26:29.772 [2024-12-06 13:21:16.579586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.586399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 13:21:16.586564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:29.772 [2024-12-06 13:21:16.586690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.657 ms 00:26:29.772 [2024-12-06 13:21:16.586744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.618962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 13:21:16.619179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:29.772 [2024-12-06 13:21:16.619213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.098 ms 00:26:29.772 [2024-12-06 13:21:16.619227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.637258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 13:21:16.637300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:29.772 [2024-12-06 13:21:16.637336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.981 ms 00:26:29.772 [2024-12-06 13:21:16.637348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.637520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 13:21:16.637540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:29.772 [2024-12-06 13:21:16.637558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:26:29.772 [2024-12-06 13:21:16.637569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.667541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 13:21:16.667582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:29.772 [2024-12-06 13:21:16.667618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.945 ms 00:26:29.772 [2024-12-06 13:21:16.667628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.696028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 13:21:16.696081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:29.772 [2024-12-06 13:21:16.696117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.354 ms 00:26:29.772 [2024-12-06 13:21:16.696128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.722886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 13:21:16.723100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:29.772 [2024-12-06 13:21:16.723164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.675 ms 00:26:29.772 [2024-12-06 13:21:16.723181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.749908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.772 [2024-12-06 
13:21:16.750115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:29.772 [2024-12-06 13:21:16.750161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.615 ms 00:26:29.772 [2024-12-06 13:21:16.750174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.772 [2024-12-06 13:21:16.750221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:29.772 [2024-12-06 13:21:16.750270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:29.772 [2024-12-06 13:21:16.750525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.750997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751295] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751657] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:29.773 [2024-12-06 13:21:16.751755] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:29.773 [2024-12-06 13:21:16.751769] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ffce2295-1387-428b-9f6d-86630af2fc43 00:26:29.773 [2024-12-06 13:21:16.751785] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:29.773 [2024-12-06 13:21:16.751798] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:29.773 [2024-12-06 13:21:16.751808] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:29.773 [2024-12-06 13:21:16.751822] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:29.773 [2024-12-06 13:21:16.751833] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:29.773 [2024-12-06 13:21:16.751847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:29.773 [2024-12-06 13:21:16.751857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:29.773 [2024-12-06 13:21:16.751872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:29.774 [2024-12-06 13:21:16.751883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:29.774 [2024-12-06 13:21:16.751897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.774 [2024-12-06 13:21:16.751908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:29.774 [2024-12-06 13:21:16.751923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:26:29.774 [2024-12-06 13:21:16.751936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.774 [2024-12-06 13:21:16.767800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.774 [2024-12-06 13:21:16.767837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:29.774 [2024-12-06 13:21:16.767873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.799 ms 00:26:29.774 [2024-12-06 13:21:16.767884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.774 [2024-12-06 13:21:16.768360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.774 [2024-12-06 13:21:16.768429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:29.774 [2024-12-06 13:21:16.768453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:26:29.774 [2024-12-06 13:21:16.768465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.811350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.811393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:30.032 [2024-12-06 13:21:16.811430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.032 [2024-12-06 13:21:16.811441] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.811507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.811521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:30.032 [2024-12-06 13:21:16.811535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.032 [2024-12-06 13:21:16.811545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.811639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.811657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:30.032 [2024-12-06 13:21:16.811671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.032 [2024-12-06 13:21:16.811681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.811712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.811725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:30.032 [2024-12-06 13:21:16.811737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.032 [2024-12-06 13:21:16.811747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.905839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.905918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:30.032 [2024-12-06 13:21:16.905975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.032 [2024-12-06 13:21:16.906003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.988068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.988168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:30.032 [2024-12-06 13:21:16.988208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.032 [2024-12-06 13:21:16.988220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.988388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.988408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:30.032 [2024-12-06 13:21:16.988423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.032 [2024-12-06 13:21:16.988434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.032 [2024-12-06 13:21:16.988503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.032 [2024-12-06 13:21:16.988521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:30.033 [2024-12-06 13:21:16.988536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.033 [2024-12-06 13:21:16.988568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.033 [2024-12-06 13:21:16.988729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.033 [2024-12-06 13:21:16.988751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:30.033 [2024-12-06 13:21:16.988770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:26:30.033 [2024-12-06 13:21:16.988781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.033 [2024-12-06 13:21:16.988840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.033 [2024-12-06 13:21:16.988865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:30.033 [2024-12-06 13:21:16.988891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.033 [2024-12-06 13:21:16.988902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.033 [2024-12-06 13:21:16.988956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.033 [2024-12-06 13:21:16.988974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:30.033 [2024-12-06 13:21:16.988989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.033 [2024-12-06 13:21:16.989013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.033 [2024-12-06 13:21:16.989073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.033 [2024-12-06 13:21:16.989090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:30.033 [2024-12-06 13:21:16.989105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.033 [2024-12-06 13:21:16.989116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.033 [2024-12-06 13:21:16.989476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 602.198 ms, result 0 00:26:30.033 true 00:26:30.033 13:21:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77939 00:26:30.033 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77939 ']' 00:26:30.033 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77939 00:26:30.033 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:30.033 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.033 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77939 00:26:30.290 killing process with pid 77939 00:26:30.290 Received shutdown signal, test time was about 4.000000 seconds 00:26:30.290 00:26:30.290 Latency(us) 00:26:30.290 [2024-12-06T13:21:17.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.290 [2024-12-06T13:21:17.306Z] =================================================================================================================== 00:26:30.290 [2024-12-06T13:21:17.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.290 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.290 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.290 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77939' 00:26:30.290 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77939 00:26:30.290 13:21:17 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77939 00:26:33.643 Remove shared memory files 00:26:33.643 13:21:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:33.643 13:21:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:26:33.643 13:21:20 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:33.643 13:21:20 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:26:33.643 13:21:20 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:26:33.643 13:21:20 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:26:33.643 13:21:20 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:33.643 13:21:20 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:26:33.643 ************************************ 00:26:33.643 END TEST ftl_bdevperf 00:26:33.643 ************************************ 00:26:33.643 00:26:33.643 real 0m25.344s 00:26:33.644 user 0m29.011s 00:26:33.644 sys 0m1.215s 00:26:33.644 13:21:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.644 13:21:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:33.644 13:21:20 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:26:33.644 13:21:20 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:33.644 13:21:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.644 13:21:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:33.644 ************************************ 00:26:33.644 START TEST ftl_trim 00:26:33.644 ************************************ 00:26:33.644 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:26:33.903 * Looking for test storage... 00:26:33.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.903 13:21:20 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:33.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.903 --rc genhtml_branch_coverage=1 00:26:33.903 --rc genhtml_function_coverage=1 00:26:33.903 --rc genhtml_legend=1 00:26:33.903 --rc geninfo_all_blocks=1 00:26:33.903 --rc geninfo_unexecuted_blocks=1 00:26:33.903 00:26:33.903 ' 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:33.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.903 --rc genhtml_branch_coverage=1 00:26:33.903 --rc genhtml_function_coverage=1 00:26:33.903 --rc genhtml_legend=1 00:26:33.903 --rc geninfo_all_blocks=1 00:26:33.903 --rc geninfo_unexecuted_blocks=1 00:26:33.903 00:26:33.903 ' 00:26:33.903 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.904 --rc genhtml_branch_coverage=1 00:26:33.904 --rc genhtml_function_coverage=1 00:26:33.904 --rc genhtml_legend=1 00:26:33.904 --rc geninfo_all_blocks=1 00:26:33.904 --rc geninfo_unexecuted_blocks=1 00:26:33.904 00:26:33.904 ' 00:26:33.904 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.904 --rc genhtml_branch_coverage=1 00:26:33.904 --rc genhtml_function_coverage=1 00:26:33.904 --rc genhtml_legend=1 00:26:33.904 --rc geninfo_all_blocks=1 00:26:33.904 --rc geninfo_unexecuted_blocks=1 00:26:33.904 00:26:33.904 ' 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
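The xtrace above walks the version comparison in scripts/common.sh: lt() calls cmp_versions, which splits each version string on ".", "-" and ":" via IFS and compares the fields numerically, which is why "lt 1.15 2" succeeds. A standalone re-implementation for illustration; the function name ver_lt and the snippet itself are not part of the SPDK scripts:

ver_lt() {                         # returns 0 (true) when $1 < $2
  local IFS=.-: i
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                         # equal versions are not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"   # mirrors the lt 1.15 2 call traced above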
00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:33.904 13:21:20 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78290 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:26:33.904 13:21:20 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78290 00:26:33.904 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78290 ']' 00:26:33.904 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.904 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.904 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.904 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.904 13:21:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:34.163 [2024-12-06 13:21:21.005580] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:26:34.163 [2024-12-06 13:21:21.006231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78290 ] 00:26:34.421 [2024-12-06 13:21:21.200383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:34.421 [2024-12-06 13:21:21.353392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.421 [2024-12-06 13:21:21.353505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.421 [2024-12-06 13:21:21.353505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.354 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.355 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:26:35.355 13:21:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:35.355 13:21:22 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:26:35.355 13:21:22 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:35.355 13:21:22 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:26:35.355 13:21:22 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:26:35.355 13:21:22 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:35.613 13:21:22 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:35.613 13:21:22 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:26:35.613 13:21:22 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:35.613 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:35.613 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:35.613 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:35.613 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:35.613 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:35.871 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:35.871 { 00:26:35.871 "name": "nvme0n1", 00:26:35.871 "aliases": [ 
00:26:35.871 "a385db0d-5b06-442e-ac4e-10b7bd3c1405" 00:26:35.871 ], 00:26:35.871 "product_name": "NVMe disk", 00:26:35.871 "block_size": 4096, 00:26:35.871 "num_blocks": 1310720, 00:26:35.871 "uuid": "a385db0d-5b06-442e-ac4e-10b7bd3c1405", 00:26:35.871 "numa_id": -1, 00:26:35.871 "assigned_rate_limits": { 00:26:35.871 "rw_ios_per_sec": 0, 00:26:35.871 "rw_mbytes_per_sec": 0, 00:26:35.871 "r_mbytes_per_sec": 0, 00:26:35.871 "w_mbytes_per_sec": 0 00:26:35.871 }, 00:26:35.871 "claimed": true, 00:26:35.871 "claim_type": "read_many_write_one", 00:26:35.871 "zoned": false, 00:26:35.871 "supported_io_types": { 00:26:35.871 "read": true, 00:26:35.871 "write": true, 00:26:35.871 "unmap": true, 00:26:35.871 "flush": true, 00:26:35.871 "reset": true, 00:26:35.871 "nvme_admin": true, 00:26:35.871 "nvme_io": true, 00:26:35.871 "nvme_io_md": false, 00:26:35.871 "write_zeroes": true, 00:26:35.871 "zcopy": false, 00:26:35.871 "get_zone_info": false, 00:26:35.871 "zone_management": false, 00:26:35.871 "zone_append": false, 00:26:35.871 "compare": true, 00:26:35.871 "compare_and_write": false, 00:26:35.871 "abort": true, 00:26:35.871 "seek_hole": false, 00:26:35.871 "seek_data": false, 00:26:35.871 "copy": true, 00:26:35.871 "nvme_iov_md": false 00:26:35.871 }, 00:26:35.871 "driver_specific": { 00:26:35.871 "nvme": [ 00:26:35.871 { 00:26:35.871 "pci_address": "0000:00:11.0", 00:26:35.871 "trid": { 00:26:35.871 "trtype": "PCIe", 00:26:35.871 "traddr": "0000:00:11.0" 00:26:35.871 }, 00:26:35.871 "ctrlr_data": { 00:26:35.871 "cntlid": 0, 00:26:35.871 "vendor_id": "0x1b36", 00:26:35.871 "model_number": "QEMU NVMe Ctrl", 00:26:35.871 "serial_number": "12341", 00:26:35.871 "firmware_revision": "8.0.0", 00:26:35.871 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:35.871 "oacs": { 00:26:35.871 "security": 0, 00:26:35.871 "format": 1, 00:26:35.871 "firmware": 0, 00:26:35.871 "ns_manage": 1 00:26:35.871 }, 00:26:35.871 "multi_ctrlr": false, 00:26:35.871 "ana_reporting": false 00:26:35.871 }, 00:26:35.871 "vs": { 00:26:35.871 "nvme_version": "1.4" 00:26:35.871 }, 00:26:35.871 "ns_data": { 00:26:35.871 "id": 1, 00:26:35.871 "can_share": false 00:26:35.871 } 00:26:35.871 } 00:26:35.871 ], 00:26:35.871 "mp_policy": "active_passive" 00:26:35.871 } 00:26:35.871 } 00:26:35.871 ]' 00:26:35.871 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:35.871 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:26:35.871 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:36.134 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:36.134 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:36.134 13:21:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:26:36.134 13:21:22 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:26:36.134 13:21:22 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:36.134 13:21:22 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:26:36.134 13:21:22 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:36.134 13:21:22 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:36.403 13:21:23 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ccadbf01-5e10-4818-a1ab-3419c5ee3b22 00:26:36.403 13:21:23 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:26:36.403 13:21:23 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ccadbf01-5e10-4818-a1ab-3419c5ee3b22 00:26:36.661 13:21:23 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:36.918 13:21:23 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d38fcbc9-594f-4f1c-8555-a09f64d2fb1c 00:26:36.918 13:21:23 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d38fcbc9-594f-4f1c-8555-a09f64d2fb1c 00:26:37.176 13:21:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.176 13:21:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.176 13:21:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:26:37.176 13:21:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:37.176 13:21:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.176 13:21:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:26:37.176 13:21:24 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.176 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.176 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:37.176 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:37.176 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:37.176 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.433 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:37.434 { 00:26:37.434 "name": "4247469d-b0b2-4a3d-b622-23eaf1116705", 00:26:37.434 "aliases": [ 00:26:37.434 "lvs/nvme0n1p0" 00:26:37.434 ], 00:26:37.434 "product_name": "Logical Volume", 00:26:37.434 "block_size": 4096, 00:26:37.434 "num_blocks": 26476544, 00:26:37.434 "uuid": "4247469d-b0b2-4a3d-b622-23eaf1116705", 00:26:37.434 "assigned_rate_limits": { 00:26:37.434 "rw_ios_per_sec": 0, 00:26:37.434 "rw_mbytes_per_sec": 0, 00:26:37.434 "r_mbytes_per_sec": 0, 00:26:37.434 "w_mbytes_per_sec": 0 00:26:37.434 }, 00:26:37.434 "claimed": false, 00:26:37.434 "zoned": false, 00:26:37.434 "supported_io_types": { 00:26:37.434 "read": true, 00:26:37.434 "write": true, 00:26:37.434 "unmap": true, 00:26:37.434 "flush": false, 00:26:37.434 "reset": true, 00:26:37.434 "nvme_admin": false, 00:26:37.434 "nvme_io": false, 00:26:37.434 "nvme_io_md": false, 00:26:37.434 "write_zeroes": true, 00:26:37.434 "zcopy": false, 00:26:37.434 "get_zone_info": false, 00:26:37.434 "zone_management": false, 00:26:37.434 "zone_append": false, 00:26:37.434 "compare": false, 00:26:37.434 "compare_and_write": false, 00:26:37.434 "abort": false, 00:26:37.434 "seek_hole": true, 00:26:37.434 "seek_data": true, 00:26:37.434 "copy": false, 00:26:37.434 "nvme_iov_md": false 00:26:37.434 }, 00:26:37.434 "driver_specific": { 00:26:37.434 "lvol": { 00:26:37.434 "lvol_store_uuid": "d38fcbc9-594f-4f1c-8555-a09f64d2fb1c", 00:26:37.434 "base_bdev": "nvme0n1", 00:26:37.434 "thin_provision": true, 00:26:37.434 "num_allocated_clusters": 0, 00:26:37.434 "snapshot": false, 00:26:37.434 "clone": false, 00:26:37.434 "esnap_clone": false 00:26:37.434 } 00:26:37.434 } 00:26:37.434 } 00:26:37.434 ]' 00:26:37.434 13:21:24 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:37.434 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:26:37.434 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:37.434 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:37.434 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:37.434 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:26:37.434 13:21:24 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:26:37.434 13:21:24 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:26:37.434 13:21:24 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:37.998 13:21:24 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:37.998 13:21:24 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:37.998 13:21:24 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.998 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.998 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:37.998 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:37.998 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:37.998 13:21:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:37.998 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:37.998 { 00:26:37.998 "name": "4247469d-b0b2-4a3d-b622-23eaf1116705", 00:26:37.998 "aliases": [ 00:26:37.998 "lvs/nvme0n1p0" 00:26:37.998 ], 00:26:37.998 "product_name": "Logical Volume", 00:26:37.998 "block_size": 4096, 00:26:37.998 "num_blocks": 26476544, 00:26:37.998 "uuid": "4247469d-b0b2-4a3d-b622-23eaf1116705", 00:26:37.998 "assigned_rate_limits": { 00:26:37.998 "rw_ios_per_sec": 0, 00:26:37.998 "rw_mbytes_per_sec": 0, 00:26:37.998 "r_mbytes_per_sec": 0, 00:26:37.998 "w_mbytes_per_sec": 0 00:26:37.998 }, 00:26:37.998 "claimed": false, 00:26:37.998 "zoned": false, 00:26:37.998 "supported_io_types": { 00:26:37.998 "read": true, 00:26:37.998 "write": true, 00:26:37.998 "unmap": true, 00:26:37.998 "flush": false, 00:26:37.998 "reset": true, 00:26:37.998 "nvme_admin": false, 00:26:37.998 "nvme_io": false, 00:26:37.998 "nvme_io_md": false, 00:26:37.998 "write_zeroes": true, 00:26:37.998 "zcopy": false, 00:26:37.998 "get_zone_info": false, 00:26:37.998 "zone_management": false, 00:26:37.998 "zone_append": false, 00:26:37.998 "compare": false, 00:26:37.998 "compare_and_write": false, 00:26:37.998 "abort": false, 00:26:37.998 "seek_hole": true, 00:26:37.998 "seek_data": true, 00:26:37.998 "copy": false, 00:26:37.999 "nvme_iov_md": false 00:26:37.999 }, 00:26:37.999 "driver_specific": { 00:26:37.999 "lvol": { 00:26:37.999 "lvol_store_uuid": "d38fcbc9-594f-4f1c-8555-a09f64d2fb1c", 00:26:37.999 "base_bdev": "nvme0n1", 00:26:37.999 "thin_provision": true, 00:26:37.999 "num_allocated_clusters": 0, 00:26:37.999 "snapshot": false, 00:26:37.999 "clone": false, 00:26:37.999 "esnap_clone": false 00:26:37.999 } 00:26:37.999 } 00:26:37.999 } 00:26:37.999 ]' 00:26:37.999 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:38.273 13:21:25 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:26:38.273 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:38.273 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:38.273 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:38.273 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:26:38.273 13:21:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:26:38.273 13:21:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:38.530 13:21:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:26:38.530 13:21:25 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:26:38.530 13:21:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:38.530 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:38.530 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:38.531 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:38.531 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:38.531 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4247469d-b0b2-4a3d-b622-23eaf1116705 00:26:38.789 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:38.789 { 00:26:38.789 "name": "4247469d-b0b2-4a3d-b622-23eaf1116705", 00:26:38.789 "aliases": [ 00:26:38.789 "lvs/nvme0n1p0" 00:26:38.789 ], 00:26:38.789 "product_name": "Logical Volume", 00:26:38.789 "block_size": 4096, 00:26:38.789 "num_blocks": 26476544, 00:26:38.789 "uuid": "4247469d-b0b2-4a3d-b622-23eaf1116705", 00:26:38.789 "assigned_rate_limits": { 00:26:38.789 "rw_ios_per_sec": 0, 00:26:38.789 "rw_mbytes_per_sec": 0, 00:26:38.789 "r_mbytes_per_sec": 0, 00:26:38.789 "w_mbytes_per_sec": 0 00:26:38.789 }, 00:26:38.789 "claimed": false, 00:26:38.789 "zoned": false, 00:26:38.789 "supported_io_types": { 00:26:38.789 "read": true, 00:26:38.789 "write": true, 00:26:38.789 "unmap": true, 00:26:38.789 "flush": false, 00:26:38.789 "reset": true, 00:26:38.789 "nvme_admin": false, 00:26:38.789 "nvme_io": false, 00:26:38.789 "nvme_io_md": false, 00:26:38.789 "write_zeroes": true, 00:26:38.789 "zcopy": false, 00:26:38.789 "get_zone_info": false, 00:26:38.789 "zone_management": false, 00:26:38.789 "zone_append": false, 00:26:38.789 "compare": false, 00:26:38.789 "compare_and_write": false, 00:26:38.789 "abort": false, 00:26:38.789 "seek_hole": true, 00:26:38.789 "seek_data": true, 00:26:38.789 "copy": false, 00:26:38.789 "nvme_iov_md": false 00:26:38.789 }, 00:26:38.789 "driver_specific": { 00:26:38.789 "lvol": { 00:26:38.789 "lvol_store_uuid": "d38fcbc9-594f-4f1c-8555-a09f64d2fb1c", 00:26:38.789 "base_bdev": "nvme0n1", 00:26:38.789 "thin_provision": true, 00:26:38.789 "num_allocated_clusters": 0, 00:26:38.789 "snapshot": false, 00:26:38.789 "clone": false, 00:26:38.789 "esnap_clone": false 00:26:38.789 } 00:26:38.789 } 00:26:38.789 } 00:26:38.789 ]' 00:26:38.789 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:38.789 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:26:38.789 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:38.789 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:26:38.789 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:38.789 13:21:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:26:38.789 13:21:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:26:38.789 13:21:25 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4247469d-b0b2-4a3d-b622-23eaf1116705 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:26:39.048 [2024-12-06 13:21:26.029284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.029343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:39.048 [2024-12-06 13:21:26.029387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:39.048 [2024-12-06 13:21:26.029401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.033267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.033311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.048 [2024-12-06 13:21:26.033331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.833 ms 00:26:39.048 [2024-12-06 13:21:26.033344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.033481] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:39.048 [2024-12-06 13:21:26.034451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:39.048 [2024-12-06 13:21:26.034498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.034514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:39.048 [2024-12-06 13:21:26.034529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:26:39.048 [2024-12-06 13:21:26.034542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.034787] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 971c63eb-6a00-4479-bdc9-d0eddd7420fb 00:26:39.048 [2024-12-06 13:21:26.036609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.036797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:39.048 [2024-12-06 13:21:26.036826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:39.048 [2024-12-06 13:21:26.036843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.046631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.046866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.048 [2024-12-06 13:21:26.046902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.685 ms 00:26:39.048 [2024-12-06 13:21:26.046922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.047156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.047184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.048 [2024-12-06 13:21:26.047199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.125 ms 00:26:39.048 [2024-12-06 13:21:26.047220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.047275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.047302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:39.048 [2024-12-06 13:21:26.047318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:39.048 [2024-12-06 13:21:26.047343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.047393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:39.048 [2024-12-06 13:21:26.052690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.052732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.048 [2024-12-06 13:21:26.052771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.302 ms 00:26:39.048 [2024-12-06 13:21:26.052784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.052864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.052902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:39.048 [2024-12-06 13:21:26.052920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:39.048 [2024-12-06 13:21:26.052932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.052977] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:39.048 [2024-12-06 13:21:26.053163] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:39.048 [2024-12-06 13:21:26.053192] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:39.048 [2024-12-06 13:21:26.053209] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:39.048 [2024-12-06 13:21:26.053251] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:39.048 [2024-12-06 13:21:26.053266] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:39.048 [2024-12-06 13:21:26.053282] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:39.048 [2024-12-06 13:21:26.053294] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:39.048 [2024-12-06 13:21:26.053311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:39.048 [2024-12-06 13:21:26.053326] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:39.048 [2024-12-06 13:21:26.053343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 [2024-12-06 13:21:26.053356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:39.048 [2024-12-06 13:21:26.053371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:26:39.048 [2024-12-06 13:21:26.053383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.053498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.048 
[2024-12-06 13:21:26.053513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:39.048 [2024-12-06 13:21:26.053528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:39.048 [2024-12-06 13:21:26.053539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.048 [2024-12-06 13:21:26.053694] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:39.049 [2024-12-06 13:21:26.053721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:39.049 [2024-12-06 13:21:26.053739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.049 [2024-12-06 13:21:26.053751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.049 [2024-12-06 13:21:26.053767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:39.049 [2024-12-06 13:21:26.053778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:39.049 [2024-12-06 13:21:26.053792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:39.049 [2024-12-06 13:21:26.053804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:39.049 [2024-12-06 13:21:26.053817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:39.049 [2024-12-06 13:21:26.053828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.049 [2024-12-06 13:21:26.053844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:39.049 [2024-12-06 13:21:26.053856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:39.049 [2024-12-06 13:21:26.053870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.049 [2024-12-06 13:21:26.053881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:39.049 [2024-12-06 13:21:26.053896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:39.049 [2024-12-06 13:21:26.053907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.049 [2024-12-06 13:21:26.053923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:39.049 [2024-12-06 13:21:26.053935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:39.049 [2024-12-06 13:21:26.053949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.049 [2024-12-06 13:21:26.053960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:39.049 [2024-12-06 13:21:26.053983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:39.049 [2024-12-06 13:21:26.053995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.049 [2024-12-06 13:21:26.054009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:39.049 [2024-12-06 13:21:26.054021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:39.049 [2024-12-06 13:21:26.054034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.049 [2024-12-06 13:21:26.054046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:39.049 [2024-12-06 13:21:26.054060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:39.049 [2024-12-06 13:21:26.054071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.049 [2024-12-06 13:21:26.054085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:26:39.049 [2024-12-06 13:21:26.054096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:39.049 [2024-12-06 13:21:26.054110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.049 [2024-12-06 13:21:26.054122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:39.049 [2024-12-06 13:21:26.054154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:39.049 [2024-12-06 13:21:26.054166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.049 [2024-12-06 13:21:26.054181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:39.049 [2024-12-06 13:21:26.054193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:39.049 [2024-12-06 13:21:26.054209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.049 [2024-12-06 13:21:26.054221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:39.049 [2024-12-06 13:21:26.054247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:39.049 [2024-12-06 13:21:26.054261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.049 [2024-12-06 13:21:26.054276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:39.049 [2024-12-06 13:21:26.054287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:39.049 [2024-12-06 13:21:26.054301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.049 [2024-12-06 13:21:26.054313] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:39.049 [2024-12-06 13:21:26.054328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:39.049 [2024-12-06 13:21:26.054340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.049 [2024-12-06 13:21:26.054354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.049 [2024-12-06 13:21:26.054367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:39.049 [2024-12-06 13:21:26.054384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:39.049 [2024-12-06 13:21:26.054396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:39.049 [2024-12-06 13:21:26.054410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:39.049 [2024-12-06 13:21:26.054421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:39.049 [2024-12-06 13:21:26.054441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:39.049 [2024-12-06 13:21:26.054455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:39.049 [2024-12-06 13:21:26.054474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.049 [2024-12-06 13:21:26.054491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:39.049 [2024-12-06 13:21:26.054506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:39.049 [2024-12-06 13:21:26.054519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:26:39.049 [2024-12-06 13:21:26.054534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:39.049 [2024-12-06 13:21:26.054547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:39.049 [2024-12-06 13:21:26.054562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:39.049 [2024-12-06 13:21:26.054575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:39.049 [2024-12-06 13:21:26.054592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:39.049 [2024-12-06 13:21:26.054605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:39.049 [2024-12-06 13:21:26.054623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:39.049 [2024-12-06 13:21:26.054635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:39.049 [2024-12-06 13:21:26.054650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:39.049 [2024-12-06 13:21:26.054664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:39.049 [2024-12-06 13:21:26.054679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:39.049 [2024-12-06 13:21:26.054692] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:39.049 [2024-12-06 13:21:26.054712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.049 [2024-12-06 13:21:26.054726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:39.049 [2024-12-06 13:21:26.054742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:39.049 [2024-12-06 13:21:26.054754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:39.049 [2024-12-06 13:21:26.054770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:39.049 [2024-12-06 13:21:26.054784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.049 [2024-12-06 13:21:26.054799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:39.049 [2024-12-06 13:21:26.054811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:26:39.049 [2024-12-06 13:21:26.054826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.049 [2024-12-06 13:21:26.054921] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:26:39.049 [2024-12-06 13:21:26.054945] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:42.344 [2024-12-06 13:21:28.675307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.675418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:42.344 [2024-12-06 13:21:28.675441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2620.398 ms 00:26:42.344 [2024-12-06 13:21:28.675458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.713109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.713465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.344 [2024-12-06 13:21:28.713603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.340 ms 00:26:42.344 [2024-12-06 13:21:28.713661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.714020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.714185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:42.344 [2024-12-06 13:21:28.714231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:42.344 [2024-12-06 13:21:28.714266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.766094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.766198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.344 [2024-12-06 13:21:28.766245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.781 ms 00:26:42.344 [2024-12-06 13:21:28.766266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.766398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.766423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.344 [2024-12-06 13:21:28.766437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:42.344 [2024-12-06 13:21:28.766453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.767066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.767110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.344 [2024-12-06 13:21:28.767138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:26:42.344 [2024-12-06 13:21:28.767155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.767333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.767352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.344 [2024-12-06 13:21:28.767386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:26:42.344 [2024-12-06 13:21:28.767405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.788556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.788816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:26:42.344 [2024-12-06 13:21:28.788847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.109 ms 00:26:42.344 [2024-12-06 13:21:28.788864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.803372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:42.344 [2024-12-06 13:21:28.826076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.826164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:42.344 [2024-12-06 13:21:28.826190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.042 ms 00:26:42.344 [2024-12-06 13:21:28.826204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.907368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.907448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:42.344 [2024-12-06 13:21:28.907473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.974 ms 00:26:42.344 [2024-12-06 13:21:28.907486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.907745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.907765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:42.344 [2024-12-06 13:21:28.907785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:26:42.344 [2024-12-06 13:21:28.907797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.937167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.937211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:42.344 [2024-12-06 13:21:28.937248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.327 ms 00:26:42.344 [2024-12-06 13:21:28.937272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.965950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.965992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:42.344 [2024-12-06 13:21:28.966030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.590 ms 00:26:42.344 [2024-12-06 13:21:28.966042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:28.967008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:28.967238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:42.344 [2024-12-06 13:21:28.967272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:26:42.344 [2024-12-06 13:21:28.967287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:29.056335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:29.056399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:42.344 [2024-12-06 13:21:29.056442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.998 ms 00:26:42.344 [2024-12-06 13:21:29.056455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
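The trace_step records running through this startup sequence follow a fixed pattern from mngt/ftl_mngt.c: an "Action" marker, then matching "name:", "duration:" and "status:" lines for each step of the 'FTL startup' management process. A minimal sketch for pulling per-step timings out of a capture of this console output, assuming it has been saved to a file (ftl_startup.log is a hypothetical name, not something the test itself produces):

    # Hypothetical post-processing helper, not part of the test suite: list the
    # FTL management steps by duration, slowest first, using only the
    # name:/duration: pairs emitted by mngt/ftl_mngt.c in the trace above.
    awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     step = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                       printf "%10.3f ms  %s\n", $0, step }' \
        ftl_startup.log | sort -rn | head

Run against this sequence, that would surface "Scrub NV cache" (2620.398 ms) as the dominant step in the 'FTL startup' total reported a little further below (3118.739 ms).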
00:26:42.344 [2024-12-06 13:21:29.087614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:29.087657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:42.344 [2024-12-06 13:21:29.087694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.033 ms 00:26:42.344 [2024-12-06 13:21:29.087707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:29.116832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:29.116875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:42.344 [2024-12-06 13:21:29.116911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.024 ms 00:26:42.344 [2024-12-06 13:21:29.116923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:29.146446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:29.146691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:42.344 [2024-12-06 13:21:29.146725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.430 ms 00:26:42.344 [2024-12-06 13:21:29.146739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:29.146916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:29.146940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:42.344 [2024-12-06 13:21:29.146960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:42.344 [2024-12-06 13:21:29.146973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:29.147071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.344 [2024-12-06 13:21:29.147088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:42.344 [2024-12-06 13:21:29.147104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:42.344 [2024-12-06 13:21:29.147117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.344 [2024-12-06 13:21:29.148333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:42.344 [2024-12-06 13:21:29.152281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3118.739 ms, result 0 00:26:42.344 [2024-12-06 13:21:29.153163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:42.344 { 00:26:42.344 "name": "ftl0", 00:26:42.344 "uuid": "971c63eb-6a00-4479-bdc9-d0eddd7420fb" 00:26:42.344 } 00:26:42.344 13:21:29 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:26:42.344 13:21:29 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:26:42.344 13:21:29 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:42.344 13:21:29 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:26:42.344 13:21:29 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:42.344 13:21:29 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:42.344 13:21:29 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:42.624 13:21:29 ftl.ftl_trim --
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:26:42.883 [ 00:26:42.883 { 00:26:42.883 "name": "ftl0", 00:26:42.883 "aliases": [ 00:26:42.883 "971c63eb-6a00-4479-bdc9-d0eddd7420fb" 00:26:42.883 ], 00:26:42.883 "product_name": "FTL disk", 00:26:42.883 "block_size": 4096, 00:26:42.883 "num_blocks": 23592960, 00:26:42.883 "uuid": "971c63eb-6a00-4479-bdc9-d0eddd7420fb", 00:26:42.883 "assigned_rate_limits": { 00:26:42.883 "rw_ios_per_sec": 0, 00:26:42.883 "rw_mbytes_per_sec": 0, 00:26:42.883 "r_mbytes_per_sec": 0, 00:26:42.883 "w_mbytes_per_sec": 0 00:26:42.883 }, 00:26:42.883 "claimed": false, 00:26:42.883 "zoned": false, 00:26:42.883 "supported_io_types": { 00:26:42.883 "read": true, 00:26:42.883 "write": true, 00:26:42.883 "unmap": true, 00:26:42.883 "flush": true, 00:26:42.883 "reset": false, 00:26:42.883 "nvme_admin": false, 00:26:42.883 "nvme_io": false, 00:26:42.883 "nvme_io_md": false, 00:26:42.883 "write_zeroes": true, 00:26:42.883 "zcopy": false, 00:26:42.883 "get_zone_info": false, 00:26:42.883 "zone_management": false, 00:26:42.883 "zone_append": false, 00:26:42.883 "compare": false, 00:26:42.883 "compare_and_write": false, 00:26:42.883 "abort": false, 00:26:42.883 "seek_hole": false, 00:26:42.883 "seek_data": false, 00:26:42.883 "copy": false, 00:26:42.883 "nvme_iov_md": false 00:26:42.883 }, 00:26:42.883 "driver_specific": { 00:26:42.883 "ftl": { 00:26:42.883 "base_bdev": "4247469d-b0b2-4a3d-b622-23eaf1116705", 00:26:42.883 "cache": "nvc0n1p0" 00:26:42.883 } 00:26:42.883 } 00:26:42.883 } 00:26:42.883 ] 00:26:42.883 13:21:29 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:26:42.883 13:21:29 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:26:42.883 13:21:29 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:43.141 13:21:30 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:26:43.141 13:21:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:26:43.399 13:21:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:26:43.399 { 00:26:43.399 "name": "ftl0", 00:26:43.399 "aliases": [ 00:26:43.399 "971c63eb-6a00-4479-bdc9-d0eddd7420fb" 00:26:43.399 ], 00:26:43.399 "product_name": "FTL disk", 00:26:43.399 "block_size": 4096, 00:26:43.399 "num_blocks": 23592960, 00:26:43.399 "uuid": "971c63eb-6a00-4479-bdc9-d0eddd7420fb", 00:26:43.399 "assigned_rate_limits": { 00:26:43.399 "rw_ios_per_sec": 0, 00:26:43.399 "rw_mbytes_per_sec": 0, 00:26:43.399 "r_mbytes_per_sec": 0, 00:26:43.399 "w_mbytes_per_sec": 0 00:26:43.399 }, 00:26:43.399 "claimed": false, 00:26:43.399 "zoned": false, 00:26:43.399 "supported_io_types": { 00:26:43.399 "read": true, 00:26:43.399 "write": true, 00:26:43.399 "unmap": true, 00:26:43.399 "flush": true, 00:26:43.399 "reset": false, 00:26:43.399 "nvme_admin": false, 00:26:43.399 "nvme_io": false, 00:26:43.399 "nvme_io_md": false, 00:26:43.399 "write_zeroes": true, 00:26:43.399 "zcopy": false, 00:26:43.399 "get_zone_info": false, 00:26:43.399 "zone_management": false, 00:26:43.399 "zone_append": false, 00:26:43.399 "compare": false, 00:26:43.399 "compare_and_write": false, 00:26:43.399 "abort": false, 00:26:43.399 "seek_hole": false, 00:26:43.399 "seek_data": false, 00:26:43.399 "copy": false, 00:26:43.399 "nvme_iov_md": false 00:26:43.399 }, 00:26:43.399 "driver_specific": { 00:26:43.399 "ftl": { 00:26:43.399 "base_bdev": "4247469d-b0b2-4a3d-b622-23eaf1116705", 
00:26:43.399 "cache": "nvc0n1p0" 00:26:43.399 } 00:26:43.399 } 00:26:43.399 } 00:26:43.399 ]' 00:26:43.399 13:21:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:26:43.657 13:21:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:26:43.657 13:21:30 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:43.917 [2024-12-06 13:21:30.684731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.684805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:43.917 [2024-12-06 13:21:30.684832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:43.917 [2024-12-06 13:21:30.684852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.684901] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:43.917 [2024-12-06 13:21:30.688573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.688748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:43.917 [2024-12-06 13:21:30.688789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.643 ms 00:26:43.917 [2024-12-06 13:21:30.688804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.689411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.689441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:43.917 [2024-12-06 13:21:30.689460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:26:43.917 [2024-12-06 13:21:30.689472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.693637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.693672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:43.917 [2024-12-06 13:21:30.693690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.125 ms 00:26:43.917 [2024-12-06 13:21:30.693702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.701191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.701224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:43.917 [2024-12-06 13:21:30.701261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.425 ms 00:26:43.917 [2024-12-06 13:21:30.701273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.732568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.732616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:43.917 [2024-12-06 13:21:30.732659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.190 ms 00:26:43.917 [2024-12-06 13:21:30.732672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.751692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.751739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:43.917 [2024-12-06 13:21:30.751779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.918 ms 00:26:43.917 [2024-12-06 13:21:30.751795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.752044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.752066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:43.917 [2024-12-06 13:21:30.752084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:26:43.917 [2024-12-06 13:21:30.752097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.782921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.782966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:43.917 [2024-12-06 13:21:30.783004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.752 ms 00:26:43.917 [2024-12-06 13:21:30.783017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.813363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.813548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:43.917 [2024-12-06 13:21:30.813586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.239 ms 00:26:43.917 [2024-12-06 13:21:30.813601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.843573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.843747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:43.917 [2024-12-06 13:21:30.843783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.830 ms 00:26:43.917 [2024-12-06 13:21:30.843797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.873943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.917 [2024-12-06 13:21:30.873994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:43.917 [2024-12-06 13:21:30.874018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.954 ms 00:26:43.917 [2024-12-06 13:21:30.874031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.917 [2024-12-06 13:21:30.874156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:43.917 [2024-12-06 13:21:30.874184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874326] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:43.917 [2024-12-06 13:21:30.874424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 
[2024-12-06 13:21:30.874703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.874987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:26:43.918 [2024-12-06 13:21:30.875061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:43.918 [2024-12-06 13:21:30.875707] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:43.918 [2024-12-06 13:21:30.875726] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 971c63eb-6a00-4479-bdc9-d0eddd7420fb 00:26:43.918 [2024-12-06 13:21:30.875739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:43.918 [2024-12-06 13:21:30.875754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:43.918 [2024-12-06 13:21:30.875766] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:43.918 [2024-12-06 13:21:30.875784] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:43.919 [2024-12-06 13:21:30.875796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:43.919 [2024-12-06 13:21:30.875811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:26:43.919 [2024-12-06 13:21:30.875823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:43.919 [2024-12-06 13:21:30.875836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:43.919 [2024-12-06 13:21:30.875847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:43.919 [2024-12-06 13:21:30.875862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.919 [2024-12-06 13:21:30.875875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:43.919 [2024-12-06 13:21:30.875891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:26:43.919 [2024-12-06 13:21:30.875903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.919 [2024-12-06 13:21:30.892999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.919 [2024-12-06 13:21:30.893056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:43.919 [2024-12-06 13:21:30.893081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.040 ms 00:26:43.919 [2024-12-06 13:21:30.893094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.919 [2024-12-06 13:21:30.893665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.919 [2024-12-06 13:21:30.893699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:43.919 [2024-12-06 13:21:30.893718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:26:43.919 [2024-12-06 13:21:30.893731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:30.953351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:30.953412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.177 [2024-12-06 13:21:30.953436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:30.953450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:30.953630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:30.953649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.177 [2024-12-06 13:21:30.953666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:30.953679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:30.953770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:30.953791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.177 [2024-12-06 13:21:30.953813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:30.953826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:30.953863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:30.953877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.177 [2024-12-06 13:21:30.953893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:30.953905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.068924] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.068990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:44.177 [2024-12-06 13:21:31.069030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.069043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.153898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.154213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.177 [2024-12-06 13:21:31.154277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.154293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.154456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.154476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.177 [2024-12-06 13:21:31.154501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.154520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.154588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.154603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.177 [2024-12-06 13:21:31.154618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.154630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.154798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.154819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.177 [2024-12-06 13:21:31.154836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.154851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.154930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.154949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:44.177 [2024-12-06 13:21:31.154965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.154976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.155044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.155059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.177 [2024-12-06 13:21:31.155078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.155090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.177 [2024-12-06 13:21:31.155187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.177 [2024-12-06 13:21:31.155207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.177 [2024-12-06 13:21:31.155223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.177 [2024-12-06 13:21:31.155236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:44.177 [2024-12-06 13:21:31.155465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.718 ms, result 0 00:26:44.177 true 00:26:44.177 13:21:31 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78290 00:26:44.177 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78290 ']' 00:26:44.178 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78290 00:26:44.178 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:26:44.178 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.178 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78290 00:26:44.435 killing process with pid 78290 00:26:44.435 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:44.435 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:44.435 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78290' 00:26:44.435 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78290 00:26:44.435 13:21:31 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78290 00:26:49.711 13:21:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:26:50.293 65536+0 records in 00:26:50.294 65536+0 records out 00:26:50.294 268435456 bytes (268 MB, 256 MiB) copied, 1.26131 s, 213 MB/s 00:26:50.294 13:21:37 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:50.552 [2024-12-06 13:21:37.324718] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
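[editor's note] The trim.sh@66 / trim.sh@69 steps above stage and replay the test pattern: dd writes 65536 records of 4 KiB from /dev/urandom into random_pattern, and spdk_dd then copies that file onto the ftl0 bdev using the bdev configuration in ftl.json. The throughput dd reports follows directly from the sizes: 65536 x 4096 B = 268435456 B, and 268435456 B / 1.26131 s is about 213 MB/s in the decimal megabytes dd uses. A minimal C check of that arithmetic, for reference only (the constants are taken from the log above; this is not SPDK code):

    /*
     * Sanity check of the dd figures above, for reference only (not SPDK code):
     * 65536 records x 4 KiB = 268435456 bytes; over the reported 1.26131 s
     * that is ~213 MB/s in decimal megabytes, matching dd's summary line.
     */
    #include <stdio.h>

    int main(void)
    {
        const long long records = 65536;      /* dd count=65536 */
        const long long bs = 4096;            /* dd bs=4K */
        const double elapsed_s = 1.26131;     /* elapsed time dd reported */

        long long bytes = records * bs;       /* 268435456 */
        printf("%lld bytes (%.0f MB, %lld MiB) copied, %g s, %.0f MB/s\n",
               bytes, bytes / 1e6, bytes >> 20, elapsed_s,
               bytes / elapsed_s / 1e6);
        return 0;
    }

Compiled and run, this prints the same summary line dd logged: 268435456 bytes (268 MB, 256 MiB) copied, 1.26131 s, 213 MB/s.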
00:26:50.552 [2024-12-06 13:21:37.324921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78498 ] 00:26:50.552 [2024-12-06 13:21:37.509081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.810 [2024-12-06 13:21:37.659415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.069 [2024-12-06 13:21:38.026460] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.069 [2024-12-06 13:21:38.026757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.328 [2024-12-06 13:21:38.196051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.328 [2024-12-06 13:21:38.196386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:51.328 [2024-12-06 13:21:38.196420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:51.328 [2024-12-06 13:21:38.196434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.328 [2024-12-06 13:21:38.200413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.328 [2024-12-06 13:21:38.200456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:51.328 [2024-12-06 13:21:38.200488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.942 ms 00:26:51.328 [2024-12-06 13:21:38.200499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.328 [2024-12-06 13:21:38.200647] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:51.328 [2024-12-06 13:21:38.201601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:51.328 [2024-12-06 13:21:38.201673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.328 [2024-12-06 13:21:38.201704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:51.328 [2024-12-06 13:21:38.201716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:26:51.328 [2024-12-06 13:21:38.201728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.328 [2024-12-06 13:21:38.204064] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:51.328 [2024-12-06 13:21:38.220416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.328 [2024-12-06 13:21:38.220458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:51.328 [2024-12-06 13:21:38.220492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.354 ms 00:26:51.328 [2024-12-06 13:21:38.220504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.328 [2024-12-06 13:21:38.220663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.328 [2024-12-06 13:21:38.220685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:51.328 [2024-12-06 13:21:38.220699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:51.328 [2024-12-06 13:21:38.220710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.328 [2024-12-06 13:21:38.229443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
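[editor's note] Every management step in this log appears as the same four-record group, emitted by trace_step in mngt/ftl_mngt.c (lines 427-431): Action (or Rollback), the step name, the duration in milliseconds, and a status code. The sketch below mirrors that output shape only; the struct, the timing helper, and the exact formatting are illustrative assumptions, not SPDK's actual implementation:

    /*
     * Minimal sketch of the four-record Action/name/duration/status group
     * that trace_step emits for every management step in this log. The
     * struct, timing helper, and formatting are illustrative assumptions,
     * not SPDK's mngt/ftl_mngt.c implementation.
     */
    #include <stdio.h>
    #include <time.h>

    struct step_ctx {
        const char *name;        /* e.g. "Initialize memory pools" */
        struct timespec start;   /* taken when the step begins */
        int status;              /* 0 on success */
    };

    static void trace_step(const struct step_ctx *step, int rollback)
    {
        struct timespec end;
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ms = (end.tv_sec - step->start.tv_sec) * 1e3 +
                    (end.tv_nsec - step->start.tv_nsec) / 1e6;

        printf("[FTL][ftl0] %s\n", rollback ? "Rollback" : "Action");
        printf("[FTL][ftl0]  name:     %s\n", step->name);
        printf("[FTL][ftl0]  duration: %.3f ms\n", ms);
        printf("[FTL][ftl0]  status:   %d\n", step->status);
    }

    int main(void)
    {
        struct step_ctx step = { .name = "Initialize memory pools", .status = 0 };
        clock_gettime(CLOCK_MONOTONIC, &step.start);
        /* ... the step's work would run here ... */
        trace_step(&step, 0 /* rollback */);
        return 0;
    }

The Rollback groups in the shutdown sequences use the same format; their names mirror the initialization steps they unwind.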
00:26:51.328 [2024-12-06 13:21:38.229491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:51.328 [2024-12-06 13:21:38.229524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.670 ms 00:26:51.328 [2024-12-06 13:21:38.229535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.328 [2024-12-06 13:21:38.229688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.328 [2024-12-06 13:21:38.229709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:51.328 [2024-12-06 13:21:38.229722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:51.328 [2024-12-06 13:21:38.229734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.328 [2024-12-06 13:21:38.229783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.328 [2024-12-06 13:21:38.229799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:51.328 [2024-12-06 13:21:38.229810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:51.328 [2024-12-06 13:21:38.229822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.329 [2024-12-06 13:21:38.229854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:51.329 [2024-12-06 13:21:38.234648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.329 [2024-12-06 13:21:38.234685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:51.329 [2024-12-06 13:21:38.234717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.804 ms 00:26:51.329 [2024-12-06 13:21:38.234727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.329 [2024-12-06 13:21:38.234813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.329 [2024-12-06 13:21:38.234832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:51.329 [2024-12-06 13:21:38.234844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:51.329 [2024-12-06 13:21:38.234855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.329 [2024-12-06 13:21:38.234891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:51.329 [2024-12-06 13:21:38.234920] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:51.329 [2024-12-06 13:21:38.234959] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:51.329 [2024-12-06 13:21:38.234978] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:51.329 [2024-12-06 13:21:38.235077] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:51.329 [2024-12-06 13:21:38.235092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:51.329 [2024-12-06 13:21:38.235106] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:51.329 [2024-12-06 13:21:38.235125] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235137] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235193] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:51.329 [2024-12-06 13:21:38.235204] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:51.329 [2024-12-06 13:21:38.235215] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:51.329 [2024-12-06 13:21:38.235225] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:51.329 [2024-12-06 13:21:38.235237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.329 [2024-12-06 13:21:38.235248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:51.329 [2024-12-06 13:21:38.235276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:26:51.329 [2024-12-06 13:21:38.235287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.329 [2024-12-06 13:21:38.235386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.329 [2024-12-06 13:21:38.235407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:51.329 [2024-12-06 13:21:38.235420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:51.329 [2024-12-06 13:21:38.235431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.329 [2024-12-06 13:21:38.235559] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:51.329 [2024-12-06 13:21:38.235576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:51.329 [2024-12-06 13:21:38.235594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:51.329 [2024-12-06 13:21:38.235629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:51.329 [2024-12-06 13:21:38.235661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.329 [2024-12-06 13:21:38.235682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:51.329 [2024-12-06 13:21:38.235708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:51.329 [2024-12-06 13:21:38.235718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.329 [2024-12-06 13:21:38.235729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:51.329 [2024-12-06 13:21:38.235742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:51.329 [2024-12-06 13:21:38.235753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:51.329 [2024-12-06 13:21:38.235776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235787] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:51.329 [2024-12-06 13:21:38.235808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:51.329 [2024-12-06 13:21:38.235839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:51.329 [2024-12-06 13:21:38.235871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:51.329 [2024-12-06 13:21:38.235903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.329 [2024-12-06 13:21:38.235925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:51.329 [2024-12-06 13:21:38.235935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:51.329 [2024-12-06 13:21:38.235946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.329 [2024-12-06 13:21:38.235957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:51.329 [2024-12-06 13:21:38.235968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:51.329 [2024-12-06 13:21:38.235979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.329 [2024-12-06 13:21:38.235989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:51.329 [2024-12-06 13:21:38.236000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:51.329 [2024-12-06 13:21:38.236010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.329 [2024-12-06 13:21:38.236020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:51.329 [2024-12-06 13:21:38.236030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:51.329 [2024-12-06 13:21:38.236040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.329 [2024-12-06 13:21:38.236051] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:51.329 [2024-12-06 13:21:38.236062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:51.329 [2024-12-06 13:21:38.236080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.329 [2024-12-06 13:21:38.236093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.329 [2024-12-06 13:21:38.236105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:51.329 [2024-12-06 13:21:38.236116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:51.329 [2024-12-06 13:21:38.236126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:51.329 
[2024-12-06 13:21:38.236137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:51.329 [2024-12-06 13:21:38.236148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:51.329 [2024-12-06 13:21:38.236159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:51.329 [2024-12-06 13:21:38.236185] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:51.329 [2024-12-06 13:21:38.236202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.329 [2024-12-06 13:21:38.236215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:51.329 [2024-12-06 13:21:38.236226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:51.329 [2024-12-06 13:21:38.236237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:51.329 [2024-12-06 13:21:38.236249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:51.329 [2024-12-06 13:21:38.236260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:51.329 [2024-12-06 13:21:38.236272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:51.329 [2024-12-06 13:21:38.236283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:51.329 [2024-12-06 13:21:38.236295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:51.329 [2024-12-06 13:21:38.236307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:51.329 [2024-12-06 13:21:38.236318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:51.329 [2024-12-06 13:21:38.236330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:51.329 [2024-12-06 13:21:38.236341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:51.329 [2024-12-06 13:21:38.236352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:51.329 [2024-12-06 13:21:38.236363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:51.330 [2024-12-06 13:21:38.236375] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:51.330 [2024-12-06 13:21:38.236387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.330 [2024-12-06 13:21:38.236400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:51.330 [2024-12-06 13:21:38.236411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:51.330 [2024-12-06 13:21:38.236423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:51.330 [2024-12-06 13:21:38.236434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:51.330 [2024-12-06 13:21:38.236447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.330 [2024-12-06 13:21:38.236466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:51.330 [2024-12-06 13:21:38.236478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:26:51.330 [2024-12-06 13:21:38.236490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.330 [2024-12-06 13:21:38.278402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.330 [2024-12-06 13:21:38.278471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.330 [2024-12-06 13:21:38.278493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.831 ms 00:26:51.330 [2024-12-06 13:21:38.278506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.330 [2024-12-06 13:21:38.278726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.330 [2024-12-06 13:21:38.278747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:51.330 [2024-12-06 13:21:38.278762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:51.330 [2024-12-06 13:21:38.278774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.330 [2024-12-06 13:21:38.332813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.330 [2024-12-06 13:21:38.333067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.330 [2024-12-06 13:21:38.333235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.004 ms 00:26:51.330 [2024-12-06 13:21:38.333376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.330 [2024-12-06 13:21:38.333604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.330 [2024-12-06 13:21:38.333664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.330 [2024-12-06 13:21:38.333770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:51.330 [2024-12-06 13:21:38.333821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.330 [2024-12-06 13:21:38.334608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.330 [2024-12-06 13:21:38.334646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.330 [2024-12-06 13:21:38.334677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:26:51.330 [2024-12-06 13:21:38.334690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.330 [2024-12-06 13:21:38.334867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.330 [2024-12-06 13:21:38.334887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.330 [2024-12-06 13:21:38.334900] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:26:51.330 [2024-12-06 13:21:38.334911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.588 [2024-12-06 13:21:38.355411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.588 [2024-12-06 13:21:38.355462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.588 [2024-12-06 13:21:38.355497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.468 ms 00:26:51.588 [2024-12-06 13:21:38.355510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.588 [2024-12-06 13:21:38.372357] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:51.588 [2024-12-06 13:21:38.372562] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:51.588 [2024-12-06 13:21:38.372589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.588 [2024-12-06 13:21:38.372603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:51.588 [2024-12-06 13:21:38.372617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.897 ms 00:26:51.588 [2024-12-06 13:21:38.372628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.588 [2024-12-06 13:21:38.400898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.588 [2024-12-06 13:21:38.401207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:51.588 [2024-12-06 13:21:38.401241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.161 ms 00:26:51.588 [2024-12-06 13:21:38.401255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.588 [2024-12-06 13:21:38.417703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.588 [2024-12-06 13:21:38.417767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:51.588 [2024-12-06 13:21:38.417803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.290 ms 00:26:51.589 [2024-12-06 13:21:38.417815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.432952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.433009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:51.589 [2024-12-06 13:21:38.433043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.007 ms 00:26:51.589 [2024-12-06 13:21:38.433055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.434094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.434183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:51.589 [2024-12-06 13:21:38.434218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:26:51.589 [2024-12-06 13:21:38.434231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.509799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.510063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:51.589 [2024-12-06 13:21:38.510096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.484 ms 00:26:51.589 [2024-12-06 13:21:38.510110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.522049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:51.589 [2024-12-06 13:21:38.543289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.543357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:51.589 [2024-12-06 13:21:38.543393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.990 ms 00:26:51.589 [2024-12-06 13:21:38.543405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.543597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.543618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:51.589 [2024-12-06 13:21:38.543631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:51.589 [2024-12-06 13:21:38.543642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.543715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.543731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:51.589 [2024-12-06 13:21:38.543743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:51.589 [2024-12-06 13:21:38.543754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.543814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.543838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:51.589 [2024-12-06 13:21:38.543851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:51.589 [2024-12-06 13:21:38.543862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.543912] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:51.589 [2024-12-06 13:21:38.543930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.543941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:51.589 [2024-12-06 13:21:38.543953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:51.589 [2024-12-06 13:21:38.543964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.573595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.573640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:51.589 [2024-12-06 13:21:38.573674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.605 ms 00:26:51.589 [2024-12-06 13:21:38.573687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.589 [2024-12-06 13:21:38.573813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.589 [2024-12-06 13:21:38.573833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:51.589 [2024-12-06 13:21:38.573846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:51.589 [2024-12-06 13:21:38.573857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
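[editor's note] The "Set FTL dirty state" step above (29.605 ms) pairs with the "Set FTL clean state" step logged in the shutdown sequence further down (13:21:49.899): a marker is set to dirty before user I/O starts and back to clean only after an orderly shutdown, so the next startup can tell whether full recovery is needed (compare the "SHM: clean 0, shm_clean 0" record earlier). A minimal sketch of that idea with invented names; the persistence itself is elided, and this is not SPDK's code:

    /*
     * Hypothetical sketch of a dirty/clean shutdown marker matching the
     * "Set FTL dirty state" / "Set FTL clean state" steps in this log.
     * Struct and function names are invented for illustration; real code
     * would persist the flag in the superblock region of the layout dump.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct superblock {
        bool clean;   /* persisted flag: true only after an orderly shutdown */
    };

    static void ftl_startup(struct superblock *sb)
    {
        if (!sb->clean)
            printf("dirty shutdown detected: run full recovery\n");
        sb->clean = false;   /* "Set FTL dirty state": persist before user I/O */
    }

    static void ftl_shutdown(struct superblock *sb)
    {
        /* flush metadata first, then: */
        sb->clean = true;    /* "Set FTL clean state": persist after quiescing */
    }

    int main(void)
    {
        struct superblock sb = { .clean = true };
        ftl_startup(&sb);    /* marks dirty, as in the startup sequence above */
        ftl_shutdown(&sb);   /* marks clean, as in the shutdown sequence below */
        printf("clean at exit: %d\n", sb.clean);
        return 0;
    }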
00:26:51.589 [2024-12-06 13:21:38.575346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:51.589 [2024-12-06 13:21:38.579448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.807 ms, result 0 00:26:51.589 [2024-12-06 13:21:38.580360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:51.589 [2024-12-06 13:21:38.595812] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:52.962  [2024-12-06T13:21:40.912Z] Copying: 22/256 [MB] (22 MBps) [2024-12-06T13:21:41.848Z] Copying: 45/256 [MB] (22 MBps) [2024-12-06T13:21:42.782Z] Copying: 68/256 [MB] (23 MBps) [2024-12-06T13:21:43.718Z] Copying: 91/256 [MB] (22 MBps) [2024-12-06T13:21:44.652Z] Copying: 114/256 [MB] (22 MBps) [2024-12-06T13:21:45.635Z] Copying: 137/256 [MB] (23 MBps) [2024-12-06T13:21:47.007Z] Copying: 160/256 [MB] (22 MBps) [2024-12-06T13:21:47.941Z] Copying: 183/256 [MB] (23 MBps) [2024-12-06T13:21:48.875Z] Copying: 206/256 [MB] (23 MBps) [2024-12-06T13:21:49.809Z] Copying: 230/256 [MB] (23 MBps) [2024-12-06T13:21:49.809Z] Copying: 253/256 [MB] (22 MBps) [2024-12-06T13:21:49.809Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-06 13:21:49.702184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:02.793 [2024-12-06 13:21:49.714457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 [2024-12-06 13:21:49.714512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:02.793 [2024-12-06 13:21:49.714548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:02.793 [2024-12-06 13:21:49.714569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.793 [2024-12-06 13:21:49.714601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:02.793 [2024-12-06 13:21:49.718157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 [2024-12-06 13:21:49.718187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:02.793 [2024-12-06 13:21:49.718217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.536 ms 00:27:02.793 [2024-12-06 13:21:49.718228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.793 [2024-12-06 13:21:49.720062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 [2024-12-06 13:21:49.720104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:02.793 [2024-12-06 13:21:49.720136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:27:02.793 [2024-12-06 13:21:49.720181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.793 [2024-12-06 13:21:49.727891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 [2024-12-06 13:21:49.727940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:02.793 [2024-12-06 13:21:49.727971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.683 ms 00:27:02.793 [2024-12-06 13:21:49.727983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.793 [2024-12-06 13:21:49.734980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 
[2024-12-06 13:21:49.735221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:02.793 [2024-12-06 13:21:49.735249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:27:02.793 [2024-12-06 13:21:49.735261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.793 [2024-12-06 13:21:49.764514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 [2024-12-06 13:21:49.764556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:02.793 [2024-12-06 13:21:49.764590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.192 ms 00:27:02.793 [2024-12-06 13:21:49.764601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.793 [2024-12-06 13:21:49.781917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 [2024-12-06 13:21:49.781966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:02.793 [2024-12-06 13:21:49.782003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.252 ms 00:27:02.793 [2024-12-06 13:21:49.782015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.793 [2024-12-06 13:21:49.782212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.793 [2024-12-06 13:21:49.782234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:02.793 [2024-12-06 13:21:49.782274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:27:02.793 [2024-12-06 13:21:49.782314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.051 [2024-12-06 13:21:49.811738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.051 [2024-12-06 13:21:49.811782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:03.051 [2024-12-06 13:21:49.811815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.397 ms 00:27:03.051 [2024-12-06 13:21:49.811826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.051 [2024-12-06 13:21:49.840804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.051 [2024-12-06 13:21:49.840845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:03.051 [2024-12-06 13:21:49.840878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.912 ms 00:27:03.051 [2024-12-06 13:21:49.840889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.051 [2024-12-06 13:21:49.870293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.051 [2024-12-06 13:21:49.870483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:03.051 [2024-12-06 13:21:49.870511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.342 ms 00:27:03.051 [2024-12-06 13:21:49.870525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.051 [2024-12-06 13:21:49.899269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.051 [2024-12-06 13:21:49.899312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:03.051 [2024-12-06 13:21:49.899356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.612 ms 00:27:03.051 [2024-12-06 13:21:49.899367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.051 [2024-12-06 13:21:49.899430] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:03.051 [2024-12-06 13:21:49.899455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:03.051 [2024-12-06 13:21:49.899738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899749] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.899999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 
13:21:49.900053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:27:03.052 [2024-12-06 13:21:49.900413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:03.052 [2024-12-06 13:21:49.900637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:03.053 [2024-12-06 13:21:49.900665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:03.053 [2024-12-06 13:21:49.900677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:03.053 [2024-12-06 13:21:49.900689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:03.053 [2024-12-06 13:21:49.900708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:03.053 [2024-12-06 13:21:49.900720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:27:03.053 [2024-12-06 13:21:49.900733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:03.053 [2024-12-06 13:21:49.900754] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:03.053 [2024-12-06 13:21:49.900766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 971c63eb-6a00-4479-bdc9-d0eddd7420fb 00:27:03.053 [2024-12-06 13:21:49.900777] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:03.053 [2024-12-06 13:21:49.900789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:03.053 [2024-12-06 13:21:49.900800] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:03.053 [2024-12-06 13:21:49.900811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:03.053 [2024-12-06 13:21:49.900822] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:03.053 [2024-12-06 13:21:49.900834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:03.053 [2024-12-06 13:21:49.900845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:03.053 [2024-12-06 13:21:49.900856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:03.053 [2024-12-06 13:21:49.900867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:03.053 [2024-12-06 13:21:49.900880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.053 [2024-12-06 13:21:49.900897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:03.053 [2024-12-06 13:21:49.900911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:27:03.053 [2024-12-06 13:21:49.900923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.053 [2024-12-06 13:21:49.917519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.053 [2024-12-06 13:21:49.917560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:03.053 [2024-12-06 13:21:49.917594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.569 ms 00:27:03.053 [2024-12-06 13:21:49.917615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.053 [2024-12-06 13:21:49.918087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.053 [2024-12-06 13:21:49.918110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:03.053 [2024-12-06 13:21:49.918143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:27:03.053 [2024-12-06 13:21:49.918174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.053 [2024-12-06 13:21:49.963968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.053 [2024-12-06 13:21:49.964035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:03.053 [2024-12-06 13:21:49.964069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.053 [2024-12-06 13:21:49.964081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.053 [2024-12-06 13:21:49.964291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.053 [2024-12-06 13:21:49.964312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:03.053 [2024-12-06 13:21:49.964325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:03.053 [2024-12-06 13:21:49.964337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.053 [2024-12-06 13:21:49.964406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.053 [2024-12-06 13:21:49.964426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:03.053 [2024-12-06 13:21:49.964438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.053 [2024-12-06 13:21:49.964450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.053 [2024-12-06 13:21:49.964476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.053 [2024-12-06 13:21:49.964504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:03.053 [2024-12-06 13:21:49.964517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.053 [2024-12-06 13:21:49.964528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.310 [2024-12-06 13:21:50.071919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.310 [2024-12-06 13:21:50.071997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:03.310 [2024-12-06 13:21:50.072033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.310 [2024-12-06 13:21:50.072046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.310 [2024-12-06 13:21:50.155670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.310 [2024-12-06 13:21:50.155747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:03.310 [2024-12-06 13:21:50.155784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.310 [2024-12-06 13:21:50.155808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.310 [2024-12-06 13:21:50.155896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.311 [2024-12-06 13:21:50.155914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:03.311 [2024-12-06 13:21:50.155926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.311 [2024-12-06 13:21:50.155938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.311 [2024-12-06 13:21:50.155974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.311 [2024-12-06 13:21:50.155988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:03.311 [2024-12-06 13:21:50.156023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.311 [2024-12-06 13:21:50.156035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.311 [2024-12-06 13:21:50.156162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.311 [2024-12-06 13:21:50.156197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:03.311 [2024-12-06 13:21:50.156214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.311 [2024-12-06 13:21:50.156226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.311 [2024-12-06 13:21:50.156282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.311 [2024-12-06 13:21:50.156301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:03.311 
[2024-12-06 13:21:50.156313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.311 [2024-12-06 13:21:50.156331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.311 [2024-12-06 13:21:50.156381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.311 [2024-12-06 13:21:50.156396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:03.311 [2024-12-06 13:21:50.156408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.311 [2024-12-06 13:21:50.156420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.311 [2024-12-06 13:21:50.156474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.311 [2024-12-06 13:21:50.156490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:03.311 [2024-12-06 13:21:50.156509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.311 [2024-12-06 13:21:50.156520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.311 [2024-12-06 13:21:50.156695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.226 ms, result 0 00:27:04.352 00:27:04.352 00:27:04.352 13:21:51 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:04.352 13:21:51 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78641 00:27:04.352 13:21:51 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78641 00:27:04.352 13:21:51 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78641 ']' 00:27:04.352 13:21:51 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.352 13:21:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.352 13:21:51 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.352 13:21:51 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.352 13:21:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:04.610 [2024-12-06 13:21:51.436815] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:27:04.610 [2024-12-06 13:21:51.437329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78641 ] 00:27:04.610 [2024-12-06 13:21:51.621631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.867 [2024-12-06 13:21:51.742834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.799 13:21:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.799 13:21:52 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:05.799 13:21:52 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:06.057 [2024-12-06 13:21:52.854935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.057 [2024-12-06 13:21:52.855040] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.057 [2024-12-06 13:21:53.050171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.057 [2024-12-06 13:21:53.050269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:06.057 [2024-12-06 13:21:53.050301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:06.057 [2024-12-06 13:21:53.050316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.057 [2024-12-06 13:21:53.054181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.057 [2024-12-06 13:21:53.054229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:06.057 [2024-12-06 13:21:53.054304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.828 ms 00:27:06.057 [2024-12-06 13:21:53.054317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.057 [2024-12-06 13:21:53.054457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:06.057 [2024-12-06 13:21:53.055457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:06.057 [2024-12-06 13:21:53.055516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.057 [2024-12-06 13:21:53.055549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:06.057 [2024-12-06 13:21:53.055564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:27:06.057 [2024-12-06 13:21:53.055589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.057 [2024-12-06 13:21:53.057914] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:06.317 [2024-12-06 13:21:53.074673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.317 [2024-12-06 13:21:53.074906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:06.317 [2024-12-06 13:21:53.075040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.770 ms 00:27:06.317 [2024-12-06 13:21:53.075097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.317 [2024-12-06 13:21:53.075362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.317 [2024-12-06 13:21:53.075552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:06.317 [2024-12-06 13:21:53.075590] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:06.318 [2024-12-06 13:21:53.075607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.084798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.318 [2024-12-06 13:21:53.084872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:06.318 [2024-12-06 13:21:53.084891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.119 ms 00:27:06.318 [2024-12-06 13:21:53.084906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.085064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.318 [2024-12-06 13:21:53.085090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:06.318 [2024-12-06 13:21:53.085104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:06.318 [2024-12-06 13:21:53.085123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.085229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.318 [2024-12-06 13:21:53.085252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:06.318 [2024-12-06 13:21:53.085267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:06.318 [2024-12-06 13:21:53.085282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.085343] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:06.318 [2024-12-06 13:21:53.090278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.318 [2024-12-06 13:21:53.090465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:06.318 [2024-12-06 13:21:53.090500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:27:06.318 [2024-12-06 13:21:53.090525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.090604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.318 [2024-12-06 13:21:53.090622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:06.318 [2024-12-06 13:21:53.090638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:06.318 [2024-12-06 13:21:53.090654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.090689] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:06.318 [2024-12-06 13:21:53.090719] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:06.318 [2024-12-06 13:21:53.090773] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:06.318 [2024-12-06 13:21:53.090799] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:06.318 [2024-12-06 13:21:53.090914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:06.318 [2024-12-06 13:21:53.090931] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:06.318 [2024-12-06 13:21:53.090973] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:06.318 [2024-12-06 13:21:53.090991] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091036] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091050] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:06.318 [2024-12-06 13:21:53.091091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:06.318 [2024-12-06 13:21:53.091102] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:06.318 [2024-12-06 13:21:53.091145] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:06.318 [2024-12-06 13:21:53.091180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.318 [2024-12-06 13:21:53.091199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:06.318 [2024-12-06 13:21:53.091213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:27:06.318 [2024-12-06 13:21:53.091244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.091344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.318 [2024-12-06 13:21:53.091366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:06.318 [2024-12-06 13:21:53.091380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:06.318 [2024-12-06 13:21:53.091397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.318 [2024-12-06 13:21:53.091526] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:06.318 [2024-12-06 13:21:53.091550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:06.318 [2024-12-06 13:21:53.091564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:06.318 [2024-12-06 13:21:53.091611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:06.318 [2024-12-06 13:21:53.091659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.318 [2024-12-06 13:21:53.091683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:06.318 [2024-12-06 13:21:53.091696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:06.318 [2024-12-06 13:21:53.091707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.318 [2024-12-06 13:21:53.091720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:06.318 [2024-12-06 13:21:53.091731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:06.318 [2024-12-06 13:21:53.091744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.318 
[2024-12-06 13:21:53.091754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:06.318 [2024-12-06 13:21:53.091768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:06.318 [2024-12-06 13:21:53.091818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:06.318 [2024-12-06 13:21:53.091858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:06.318 [2024-12-06 13:21:53.091892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:06.318 [2024-12-06 13:21:53.091929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.318 [2024-12-06 13:21:53.091955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:06.318 [2024-12-06 13:21:53.091967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:06.318 [2024-12-06 13:21:53.091980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.318 [2024-12-06 13:21:53.091990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:06.318 [2024-12-06 13:21:53.092003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:06.318 [2024-12-06 13:21:53.092014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.318 [2024-12-06 13:21:53.092028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:06.318 [2024-12-06 13:21:53.092042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:06.318 [2024-12-06 13:21:53.092057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.318 [2024-12-06 13:21:53.092068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:06.318 [2024-12-06 13:21:53.092081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:06.318 [2024-12-06 13:21:53.092092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.318 [2024-12-06 13:21:53.092107] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:06.318 [2024-12-06 13:21:53.092121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:06.318 [2024-12-06 13:21:53.092150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.318 [2024-12-06 13:21:53.092162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.318 [2024-12-06 13:21:53.092177] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:06.318 [2024-12-06 13:21:53.092188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:06.318 [2024-12-06 13:21:53.092204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:06.318 [2024-12-06 13:21:53.092215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:06.318 [2024-12-06 13:21:53.092240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:06.318 [2024-12-06 13:21:53.092252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:06.318 [2024-12-06 13:21:53.092267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:06.318 [2024-12-06 13:21:53.092282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.318 [2024-12-06 13:21:53.092302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:06.318 [2024-12-06 13:21:53.092314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:06.318 [2024-12-06 13:21:53.092328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:06.318 [2024-12-06 13:21:53.092340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:06.319 [2024-12-06 13:21:53.092354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:06.319 [2024-12-06 13:21:53.092365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:06.319 [2024-12-06 13:21:53.092379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:06.319 [2024-12-06 13:21:53.092391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:06.319 [2024-12-06 13:21:53.092410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:06.319 [2024-12-06 13:21:53.092429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:06.319 [2024-12-06 13:21:53.092446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:06.319 [2024-12-06 13:21:53.092459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:06.319 [2024-12-06 13:21:53.092475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:06.319 [2024-12-06 13:21:53.092488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:06.319 [2024-12-06 13:21:53.092505] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:06.319 [2024-12-06 
13:21:53.092519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.319 [2024-12-06 13:21:53.092543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:06.319 [2024-12-06 13:21:53.092572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:06.319 [2024-12-06 13:21:53.092590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:06.319 [2024-12-06 13:21:53.092603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:06.319 [2024-12-06 13:21:53.092622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.092635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:06.319 [2024-12-06 13:21:53.092653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:27:06.319 [2024-12-06 13:21:53.092670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.134165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.134224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:06.319 [2024-12-06 13:21:53.134273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.402 ms 00:27:06.319 [2024-12-06 13:21:53.134295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.134499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.134521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:06.319 [2024-12-06 13:21:53.134542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:06.319 [2024-12-06 13:21:53.134556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.180184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.180261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:06.319 [2024-12-06 13:21:53.180288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.530 ms 00:27:06.319 [2024-12-06 13:21:53.180303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.180467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.180487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:06.319 [2024-12-06 13:21:53.180508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:06.319 [2024-12-06 13:21:53.180521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.181116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.181171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:06.319 [2024-12-06 13:21:53.181202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:27:06.319 [2024-12-06 13:21:53.181216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.181406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.181425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:06.319 [2024-12-06 13:21:53.181445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:27:06.319 [2024-12-06 13:21:53.181458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.204217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.204268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:06.319 [2024-12-06 13:21:53.204310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.704 ms 00:27:06.319 [2024-12-06 13:21:53.204324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.233707] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:06.319 [2024-12-06 13:21:53.233756] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:06.319 [2024-12-06 13:21:53.233786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.233801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:06.319 [2024-12-06 13:21:53.233822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.265 ms 00:27:06.319 [2024-12-06 13:21:53.233851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.263489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.263533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:06.319 [2024-12-06 13:21:53.263575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.527 ms 00:27:06.319 [2024-12-06 13:21:53.263603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.278591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.278791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:06.319 [2024-12-06 13:21:53.278838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.872 ms 00:27:06.319 [2024-12-06 13:21:53.278854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.293303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.293350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:06.319 [2024-12-06 13:21:53.293390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.347 ms 00:27:06.319 [2024-12-06 13:21:53.293403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.319 [2024-12-06 13:21:53.294390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.319 [2024-12-06 13:21:53.294430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:06.319 [2024-12-06 13:21:53.294455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:27:06.319 [2024-12-06 13:21:53.294469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 
13:21:53.369325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.369406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:06.578 [2024-12-06 13:21:53.369452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.814 ms 00:27:06.578 [2024-12-06 13:21:53.369467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.382315] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:06.578 [2024-12-06 13:21:53.404671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.404790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:06.578 [2024-12-06 13:21:53.404821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.036 ms 00:27:06.578 [2024-12-06 13:21:53.404840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.405052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.405079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:06.578 [2024-12-06 13:21:53.405094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:06.578 [2024-12-06 13:21:53.405111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.405249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.405277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:06.578 [2024-12-06 13:21:53.405308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:06.578 [2024-12-06 13:21:53.405335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.405373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.405397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:06.578 [2024-12-06 13:21:53.405411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:06.578 [2024-12-06 13:21:53.405429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.405484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:06.578 [2024-12-06 13:21:53.405513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.405532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:06.578 [2024-12-06 13:21:53.405551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:06.578 [2024-12-06 13:21:53.405565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.435605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.435649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:06.578 [2024-12-06 13:21:53.435690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.988 ms 00:27:06.578 [2024-12-06 13:21:53.435704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.435838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.578 [2024-12-06 13:21:53.435859] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:06.578 [2024-12-06 13:21:53.435878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:06.578 [2024-12-06 13:21:53.435897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.578 [2024-12-06 13:21:53.437298] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:06.578 [2024-12-06 13:21:53.441148] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.689 ms, result 0 00:27:06.578 [2024-12-06 13:21:53.442411] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:06.578 Some configs were skipped because the RPC state that can call them passed over. 00:27:06.578 13:21:53 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:06.837 [2024-12-06 13:21:53.759871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.837 [2024-12-06 13:21:53.760211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:06.837 [2024-12-06 13:21:53.760389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:27:06.837 [2024-12-06 13:21:53.760458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.837 [2024-12-06 13:21:53.760640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.624 ms, result 0 00:27:06.837 true 00:27:06.837 13:21:53 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:07.097 [2024-12-06 13:21:54.051871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.097 [2024-12-06 13:21:54.052188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:07.097 [2024-12-06 13:21:54.052343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms 00:27:07.097 [2024-12-06 13:21:54.052399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.097 [2024-12-06 13:21:54.052606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.129 ms, result 0 00:27:07.097 true 00:27:07.097 13:21:54 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78641 00:27:07.097 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78641 ']' 00:27:07.097 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78641 00:27:07.097 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:07.097 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.097 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78641 00:27:07.356 killing process with pid 78641 00:27:07.356 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.356 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.356 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78641' 00:27:07.356 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78641 00:27:07.356 13:21:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78641 00:27:08.339 [2024-12-06 13:21:55.113652] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.339 [2024-12-06 13:21:55.113752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:08.340 [2024-12-06 13:21:55.113773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:08.340 [2024-12-06 13:21:55.113788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.113823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:08.340 [2024-12-06 13:21:55.117376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.117409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:08.340 [2024-12-06 13:21:55.117428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:27:08.340 [2024-12-06 13:21:55.117439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.117771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.117790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:08.340 [2024-12-06 13:21:55.117805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:27:08.340 [2024-12-06 13:21:55.117816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.121781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.121823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:08.340 [2024-12-06 13:21:55.121864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:27:08.340 [2024-12-06 13:21:55.121892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.128665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.128909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:08.340 [2024-12-06 13:21:55.128954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.709 ms 00:27:08.340 [2024-12-06 13:21:55.128968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.141213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.141299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:08.340 [2024-12-06 13:21:55.141342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.128 ms 00:27:08.340 [2024-12-06 13:21:55.141354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.150567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.150876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:08.340 [2024-12-06 13:21:55.150915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.106 ms 00:27:08.340 [2024-12-06 13:21:55.150929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.151163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.151186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:08.340 [2024-12-06 13:21:55.151204] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:27:08.340 [2024-12-06 13:21:55.151217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.164695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.165114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:08.340 [2024-12-06 13:21:55.165173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.431 ms 00:27:08.340 [2024-12-06 13:21:55.165191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.178404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.178746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:08.340 [2024-12-06 13:21:55.178795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.082 ms 00:27:08.340 [2024-12-06 13:21:55.178820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.191118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.191215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:08.340 [2024-12-06 13:21:55.191255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms 00:27:08.340 [2024-12-06 13:21:55.191267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.202887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-12-06 13:21:55.202972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:08.340 [2024-12-06 13:21:55.203009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.502 ms 00:27:08.340 [2024-12-06 13:21:55.203021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.340 [2024-12-06 13:21:55.203075] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:08.340 [2024-12-06 13:21:55.203099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 
13:21:55.203302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:08.340 [2024-12-06 13:21:55.203659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:08.340 [2024-12-06 13:21:55.203674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.203994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:08.340 [2024-12-06 13:21:55.204676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:08.340 [2024-12-06 13:21:55.204700] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 971c63eb-6a00-4479-bdc9-d0eddd7420fb
00:27:08.340 [2024-12-06 13:21:55.204717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:08.340 [2024-12-06 13:21:55.204732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:27:08.340 [2024-12-06 13:21:55.204744] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:08.340 [2024-12-06 13:21:55.204760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:08.340 [2024-12-06 13:21:55.204771] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:08.340 [2024-12-06 13:21:55.204785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:08.341 [2024-12-06 13:21:55.204797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:08.341 [2024-12-06 13:21:55.204811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:08.341 [2024-12-06 13:21:55.204822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:08.341 [2024-12-06 13:21:55.204836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:08.341 [2024-12-06 13:21:55.204849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:08.341 [2024-12-06 13:21:55.204865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.765 ms
00:27:08.341 [2024-12-06 13:21:55.204877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.341 [2024-12-06 13:21:55.221598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:08.341 [2024-12-06 13:21:55.221671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:08.341 [2024-12-06 13:21:55.221715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.657 ms
00:27:08.341 [2024-12-06 13:21:55.221728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.341 [2024-12-06 13:21:55.222421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:08.341 [2024-12-06 13:21:55.222451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:08.341 [2024-12-06 13:21:55.222476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms
00:27:08.341 [2024-12-06 13:21:55.222490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.341 [2024-12-06 13:21:55.280258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.341 [2024-12-06 13:21:55.280321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:08.341 [2024-12-06 13:21:55.280345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.341 [2024-12-06 13:21:55.280359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.341 [2024-12-06 13:21:55.280520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.341 [2024-12-06 13:21:55.280554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:08.341 [2024-12-06 13:21:55.280573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.341 [2024-12-06 13:21:55.280585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.341 [2024-12-06 13:21:55.280657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.341 [2024-12-06 13:21:55.280676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:08.341 [2024-12-06 13:21:55.280711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.341 [2024-12-06 13:21:55.280739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.341 [2024-12-06 13:21:55.280769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.341 [2024-12-06 13:21:55.280784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:08.341 [2024-12-06 13:21:55.280799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.341 [2024-12-06 13:21:55.280815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.391727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.391807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:08.598 [2024-12-06 13:21:55.391831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.391845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.480356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.480436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:08.598 [2024-12-06 13:21:55.480460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.480479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.480601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.480620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:08.598 [2024-12-06 13:21:55.480640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.480652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.480696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.480711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:08.598 [2024-12-06 13:21:55.480727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.480739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.480887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.480906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:08.598 [2024-12-06 13:21:55.480923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.480935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.480994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.481013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:08.598 [2024-12-06 13:21:55.481029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.481042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.481099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.481116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:08.598 [2024-12-06 13:21:55.481169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.481185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.481251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:08.598 [2024-12-06 13:21:55.481271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:08.598 [2024-12-06 13:21:55.481294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:08.598 [2024-12-06 13:21:55.481307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:08.598 [2024-12-06 13:21:55.481490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.805 ms, result 0
00:27:09.534 13:21:56 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:27:09.534 13:21:56 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:09.534 [2024-12-06 13:21:56.537667] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
[2024-12-06 13:21:56.537845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ]
00:27:09.793 [2024-12-06 13:21:56.714051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.051 [2024-12-06 13:21:56.826867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:10.309 [2024-12-06 13:21:57.178765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-06 13:21:57.178866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:10.568 [2024-12-06 13:21:57.342104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.568 [2024-12-06 13:21:57.342183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:27:10.568 [2024-12-06 13:21:57.342207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:27:10.568 [2024-12-06 13:21:57.342220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.568 [2024-12-06 13:21:57.346027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.346069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:10.569 [2024-12-06 13:21:57.346102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.764 ms
00:27:10.569 [2024-12-06 13:21:57.346113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.346351] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:10.569 [2024-12-06 13:21:57.347316] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:10.569 [2024-12-06 13:21:57.347498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.347520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:10.569 [2024-12-06 13:21:57.347549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms
00:27:10.569 [2024-12-06 13:21:57.347579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.349742] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:10.569 [2024-12-06 13:21:57.366851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.366940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:27:10.569 [2024-12-06 13:21:57.366975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.111 ms
00:27:10.569 [2024-12-06 13:21:57.366988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.367105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.367148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:27:10.569 [2024-12-06 13:21:57.367182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:27:10.569 [2024-12-06 13:21:57.367195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.376348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.376397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:10.569 [2024-12-06 13:21:57.376433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.093 ms
00:27:10.569 [2024-12-06 13:21:57.376446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.376582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.376604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:10.569 [2024-12-06 13:21:57.376619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:27:10.569 [2024-12-06 13:21:57.376632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.376689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.376706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:27:10.569 [2024-12-06 13:21:57.376719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:27:10.569 [2024-12-06 13:21:57.376731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.376764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:27:10.569 [2024-12-06 13:21:57.381910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.381953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:10.569 [2024-12-06 13:21:57.381970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.154 ms
00:27:10.569 [2024-12-06 13:21:57.381983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.382077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.382098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:27:10.569 [2024-12-06 13:21:57.382111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:27:10.569 [2024-12-06 13:21:57.382150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.382196] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:10.569 [2024-12-06 13:21:57.382228] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:27:10.569 [2024-12-06 13:21:57.382282] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:27:10.569 [2024-12-06 13:21:57.382304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:27:10.569 [2024-12-06 13:21:57.382413] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:27:10.569 [2024-12-06 13:21:57.382430] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:10.569 [2024-12-06 13:21:57.382446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:27:10.569 [2024-12-06 13:21:57.382467] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:27:10.569 [2024-12-06 13:21:57.382482] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:27:10.569 [2024-12-06 13:21:57.382495] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:27:10.569 [2024-12-06 13:21:57.382508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:27:10.569 [2024-12-06 13:21:57.382519] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:27:10.569 [2024-12-06 13:21:57.382530] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:27:10.569 [2024-12-06 13:21:57.382544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.382556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:27:10.569 [2024-12-06 13:21:57.382569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms
00:27:10.569 [2024-12-06 13:21:57.382581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.382682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.569 [2024-12-06 13:21:57.382703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:27:10.569 [2024-12-06 13:21:57.382717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:27:10.569 [2024-12-06 13:21:57.382728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.569 [2024-12-06 13:21:57.382842] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:27:10.569 [2024-12-06 13:21:57.382859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:27:10.569 [2024-12-06 13:21:57.382873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:10.569 [2024-12-06 13:21:57.382886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:10.569 [2024-12-06 13:21:57.382899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:27:10.569 [2024-12-06 13:21:57.382910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:27:10.569 [2024-12-06 13:21:57.382921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:27:10.569 [2024-12-06 13:21:57.382936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:27:10.569 [2024-12-06 13:21:57.382947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:27:10.569 [2024-12-06 13:21:57.382959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:10.569 [2024-12-06 13:21:57.382970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:27:10.569 [2024-12-06 13:21:57.382996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:27:10.569 [2024-12-06 13:21:57.383008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:10.569 [2024-12-06 13:21:57.383020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:27:10.569 [2024-12-06 13:21:57.383031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:27:10.569 [2024-12-06 13:21:57.383043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:27:10.569 [2024-12-06 13:21:57.383067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:27:10.569 [2024-12-06 13:21:57.383079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:27:10.569 [2024-12-06 13:21:57.383103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:10.569 [2024-12-06 13:21:57.383142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:27:10.569 [2024-12-06 13:21:57.383158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:10.569 [2024-12-06 13:21:57.383182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:27:10.569 [2024-12-06 13:21:57.383193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:10.569 [2024-12-06 13:21:57.383217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:27:10.569 [2024-12-06 13:21:57.383228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:10.569 [2024-12-06 13:21:57.383251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:27:10.569 [2024-12-06 13:21:57.383262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:10.569 [2024-12-06 13:21:57.383285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:27:10.569 [2024-12-06 13:21:57.383297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:27:10.569 [2024-12-06 13:21:57.383308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:10.569 [2024-12-06 13:21:57.383320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:27:10.569 [2024-12-06 13:21:57.383331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:27:10.569 [2024-12-06 13:21:57.383343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:27:10.569 [2024-12-06 13:21:57.383365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:27:10.569 [2024-12-06 13:21:57.383378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383389] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:27:10.569 [2024-12-06 13:21:57.383402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:27:10.569 [2024-12-06 13:21:57.383420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:10.569 [2024-12-06 13:21:57.383432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:10.569 [2024-12-06 13:21:57.383444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:27:10.570 [2024-12-06 13:21:57.383456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:27:10.570 [2024-12-06 13:21:57.383469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:27:10.570 [2024-12-06 13:21:57.383481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:27:10.570 [2024-12-06 13:21:57.383492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:27:10.570 [2024-12-06 13:21:57.383504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:27:10.570 [2024-12-06 13:21:57.383517] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:27:10.570 [2024-12-06 13:21:57.383532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:10.570 [2024-12-06 13:21:57.383546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:27:10.570 [2024-12-06 13:21:57.383559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:27:10.570 [2024-12-06 13:21:57.383571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:27:10.570 [2024-12-06 13:21:57.383584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:27:10.570 [2024-12-06 13:21:57.383596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:27:10.570 [2024-12-06 13:21:57.383608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:27:10.570 [2024-12-06 13:21:57.383621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:27:10.570 [2024-12-06 13:21:57.383633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:27:10.570 [2024-12-06 13:21:57.383646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:27:10.570 [2024-12-06 13:21:57.383659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:27:10.570 [2024-12-06 13:21:57.383671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:27:10.570 [2024-12-06 13:21:57.383683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:27:10.570 [2024-12-06 13:21:57.383695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:27:10.570 [2024-12-06 13:21:57.383708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:27:10.570 [2024-12-06 13:21:57.383721] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:27:10.570 [2024-12-06 13:21:57.383734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:10.570 [2024-12-06 13:21:57.383749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:10.570 [2024-12-06 13:21:57.383762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:10.570 [2024-12-06 13:21:57.383774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:10.570 [2024-12-06 13:21:57.383787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:10.570 [2024-12-06 13:21:57.383801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.383829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:27:10.570 [2024-12-06 13:21:57.383842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms
00:27:10.570 [2024-12-06 13:21:57.383854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.425661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.425945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:10.570 [2024-12-06 13:21:57.426102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.724 ms
00:27:10.570 [2024-12-06 13:21:57.426184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.426566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.426727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:27:10.570 [2024-12-06 13:21:57.426848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:27:10.570 [2024-12-06 13:21:57.426898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.486178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.486449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:10.570 [2024-12-06 13:21:57.486580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.070 ms
00:27:10.570 [2024-12-06 13:21:57.486632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.486929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.487000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:10.570 [2024-12-06 13:21:57.487224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:27:10.570 [2024-12-06 13:21:57.487282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.487909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.488073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:10.570 [2024-12-06 13:21:57.488218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms
00:27:10.570 [2024-12-06 13:21:57.488271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.488549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.488682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:10.570 [2024-12-06 13:21:57.488798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms
00:27:10.570 [2024-12-06 13:21:57.488906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.509163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.509370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:10.570 [2024-12-06 13:21:57.509501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.181 ms
00:27:10.570 [2024-12-06 13:21:57.509618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.526221] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:27:10.570 [2024-12-06 13:21:57.526451] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:10.570 [2024-12-06 13:21:57.526589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.526699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:27:10.570 [2024-12-06 13:21:57.526750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.769 ms
00:27:10.570 [2024-12-06 13:21:57.526845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.556590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.556840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:27:10.570 [2024-12-06 13:21:57.556872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.606 ms
00:27:10.570 [2024-12-06 13:21:57.556889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.570 [2024-12-06 13:21:57.573637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.570 [2024-12-06 13:21:57.573685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:27:10.570 [2024-12-06 13:21:57.573718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.470 ms
00:27:10.570 [2024-12-06 13:21:57.573730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.589495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.589541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:27:10.829 [2024-12-06 13:21:57.589576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.671 ms
00:27:10.829 [2024-12-06 13:21:57.589604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.590610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.590772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:27:10.829 [2024-12-06 13:21:57.590800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms
00:27:10.829 [2024-12-06 13:21:57.590814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.669640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.669725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:27:10.829 [2024-12-06 13:21:57.669750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.782 ms
00:27:10.829 [2024-12-06 13:21:57.669765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.682777] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:27:10.829 [2024-12-06 13:21:57.704763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.704844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:27:10.829 [2024-12-06 13:21:57.704868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.822 ms
00:27:10.829 [2024-12-06 13:21:57.704892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.705061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.705082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:27:10.829 [2024-12-06 13:21:57.705097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:27:10.829 [2024-12-06 13:21:57.705111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.705205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.705225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:27:10.829 [2024-12-06 13:21:57.705239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:27:10.829 [2024-12-06 13:21:57.705258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.705310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.705329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:27:10.829 [2024-12-06 13:21:57.705342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:27:10.829 [2024-12-06 13:21:57.705355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.705407] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:10.829 [2024-12-06 13:21:57.705425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.705437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:27:10.829 [2024-12-06 13:21:57.705450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:27:10.829 [2024-12-06 13:21:57.705463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.737256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.737330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:10.829 [2024-12-06 13:21:57.737364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.762 ms
00:27:10.829 [2024-12-06 13:21:57.737379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.737539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:10.829 [2024-12-06 13:21:57.737561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:10.829 [2024-12-06 13:21:57.737576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:27:10.829 [2024-12-06 13:21:57.737589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:10.829 [2024-12-06 13:21:57.738789] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:10.829 [2024-12-06 13:21:57.743256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.351 ms, result 0
00:27:10.829 [2024-12-06 13:21:57.744288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:10.829 [2024-12-06 13:21:57.760422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:11.763  [2024-12-06T13:22:00.154Z] Copying: 26/256 [MB] (26 MBps)
[2024-12-06T13:22:01.089Z] Copying: 51/256 [MB] (24 MBps)
[2024-12-06T13:22:02.026Z] Copying: 75/256 [MB] (24 MBps)
[2024-12-06T13:22:02.975Z] Copying: 98/256 [MB] (23 MBps)
[2024-12-06T13:22:03.912Z] Copying: 121/256 [MB] (22 MBps)
[2024-12-06T13:22:04.848Z] Copying: 143/256 [MB] (22 MBps)
[2024-12-06T13:22:05.784Z] Copying: 165/256 [MB] (22 MBps)
[2024-12-06T13:22:07.158Z] Copying: 188/256 [MB] (22 MBps)
[2024-12-06T13:22:08.095Z] Copying: 211/256 [MB] (23 MBps)
[2024-12-06T13:22:09.031Z] Copying: 234/256 [MB] (22 MBps)
[2024-12-06T13:22:09.031Z] Copying: 256/256 [MB] (average 23 MBps)
[2024-12-06 13:22:08.700657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:22.015 [2024-12-06 13:22:08.713708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.713758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:22.015 [2024-12-06 13:22:08.713808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:27:22.015 [2024-12-06 13:22:08.713831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.713883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:27:22.015 [2024-12-06 13:22:08.717879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.717921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:22.015 [2024-12-06 13:22:08.717963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms
00:27:22.015 [2024-12-06 13:22:08.717985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.718447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.718492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:22.015 [2024-12-06 13:22:08.718518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms
00:27:22.015 [2024-12-06 13:22:08.718541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.722529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.722575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:27:22.015 [2024-12-06 13:22:08.722602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.920 ms
00:27:22.015 [2024-12-06 13:22:08.722624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.730298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.730345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:27:22.015 [2024-12-06 13:22:08.730372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.632 ms
00:27:22.015 [2024-12-06 13:22:08.730394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.759993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.760038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:27:22.015 [2024-12-06 13:22:08.760065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.477 ms
00:27:22.015 [2024-12-06 13:22:08.760085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.777459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.777502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:27:22.015 [2024-12-06 13:22:08.777560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.219 ms
00:27:22.015 [2024-12-06 13:22:08.777582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.777811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.777871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:27:22.015 [2024-12-06 13:22:08.777917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms
00:27:22.015 [2024-12-06 13:22:08.777938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.807801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.807844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:27:22.015 [2024-12-06 13:22:08.807869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.827 ms
00:27:22.015 [2024-12-06 13:22:08.807888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.836993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.837036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:27:22.015 [2024-12-06 13:22:08.837061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.024 ms
00:27:22.015 [2024-12-06 13:22:08.837081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.866063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.866121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:22.015 [2024-12-06 13:22:08.866181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.850 ms
00:27:22.015 [2024-12-06 13:22:08.866204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.895746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:22.015 [2024-12-06 13:22:08.895805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:22.015 [2024-12-06 13:22:08.895832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.374 ms
00:27:22.015 [2024-12-06 13:22:08.895853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:22.015 [2024-12-06 13:22:08.895937] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:22.015 [2024-12-06 13:22:08.896000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:27:22.015 [2024-12-06 13:22:08.896599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.896992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.897988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.898007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.898027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.898073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.898094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.898113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.898158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:27:22.016 [2024-12-06 13:22:08.898180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:22.016 [2024-12-06 13:22:08.898201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:22.016 [2024-12-06 13:22:08.898220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:22.016 [2024-12-06 13:22:08.898278] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:22.016 [2024-12-06 13:22:08.898301] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 971c63eb-6a00-4479-bdc9-d0eddd7420fb 00:27:22.016 [2024-12-06 13:22:08.898323] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:22.016 [2024-12-06 13:22:08.898343] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:22.016 [2024-12-06 13:22:08.898362] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:22.016 [2024-12-06 13:22:08.898381] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:22.016 [2024-12-06 13:22:08.898401] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:22.016 [2024-12-06 13:22:08.898420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:22.016 [2024-12-06 13:22:08.898458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:22.016 [2024-12-06 13:22:08.898478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:22.016 [2024-12-06 13:22:08.898495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:22.016 [2024-12-06 13:22:08.898514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.016 [2024-12-06 13:22:08.898533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:22.016 [2024-12-06 13:22:08.898555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:27:22.016 [2024-12-06 13:22:08.898589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.016 [2024-12-06 13:22:08.917267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.016 [2024-12-06 13:22:08.917309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:22.016 [2024-12-06 13:22:08.917336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.625 ms 00:27:22.017 [2024-12-06 13:22:08.917374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.017 [2024-12-06 13:22:08.918038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.017 [2024-12-06 13:22:08.918082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:22.017 [2024-12-06 13:22:08.918109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:27:22.017 [2024-12-06 13:22:08.918179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.017 [2024-12-06 13:22:08.965037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.017 [2024-12-06 13:22:08.965086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.017 [2024-12-06 13:22:08.965114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.017 [2024-12-06 13:22:08.965161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.017 [2024-12-06 13:22:08.965351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.017 [2024-12-06 
13:22:08.965413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.017 [2024-12-06 13:22:08.965452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.017 [2024-12-06 13:22:08.965473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.017 [2024-12-06 13:22:08.965565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.017 [2024-12-06 13:22:08.965595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.017 [2024-12-06 13:22:08.965619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.017 [2024-12-06 13:22:08.965638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.017 [2024-12-06 13:22:08.965697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.017 [2024-12-06 13:22:08.965722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.017 [2024-12-06 13:22:08.965743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.017 [2024-12-06 13:22:08.965762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.286 [2024-12-06 13:22:09.075793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.286 [2024-12-06 13:22:09.075873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.286 [2024-12-06 13:22:09.075903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.286 [2024-12-06 13:22:09.075922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.286 [2024-12-06 13:22:09.160686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.286 [2024-12-06 13:22:09.160751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.286 [2024-12-06 13:22:09.160796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.286 [2024-12-06 13:22:09.160817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.286 [2024-12-06 13:22:09.160947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.286 [2024-12-06 13:22:09.160976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.286 [2024-12-06 13:22:09.160998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.286 [2024-12-06 13:22:09.161018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.286 [2024-12-06 13:22:09.161076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.286 [2024-12-06 13:22:09.161138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.286 [2024-12-06 13:22:09.161204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.286 [2024-12-06 13:22:09.161227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.286 [2024-12-06 13:22:09.161413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.286 [2024-12-06 13:22:09.161443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.286 [2024-12-06 13:22:09.161466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.286 [2024-12-06 13:22:09.161487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.286 [2024-12-06 13:22:09.161573] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.287 [2024-12-06 13:22:09.161603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:22.287 [2024-12-06 13:22:09.161646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.287 [2024-12-06 13:22:09.161667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.287 [2024-12-06 13:22:09.161743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.287 [2024-12-06 13:22:09.161768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.287 [2024-12-06 13:22:09.161789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.287 [2024-12-06 13:22:09.161809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.287 [2024-12-06 13:22:09.161893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.287 [2024-12-06 13:22:09.161938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.287 [2024-12-06 13:22:09.161960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.287 [2024-12-06 13:22:09.161979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.287 [2024-12-06 13:22:09.162302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 448.563 ms, result 0 00:27:23.235 00:27:23.235 00:27:23.235 13:22:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:27:23.235 13:22:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:23.800 13:22:10 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:24.062 [2024-12-06 13:22:10.869414] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
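The 'FTL shutdown' that finishes above closes out the previous phase of the ftl_trim test; note in the stats dump preceding it that 960 total writes against 0 user writes is why WAF (write amplification factor) is reported as inf, since every write so far was metadata housekeeping. The xtrace lines that follow (ftl/trim.sh@86 through @90) verify the trim and start the next pass: a trimmed LBA range must read back as zeroes, so the dumped data file is compared byte-for-byte against /dev/zero, checksummed, and then a fresh random pattern is written through the ftl0 bdev. A minimal standalone sketch of that step, using the same commands and the workspace paths visible in this run (the data and pattern files are produced earlier in the test, so their contents are assumed here):

    DATA=/home/vagrant/spdk_repo/spdk/test/ftl/data
    PATTERN=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # Trimmed blocks must read back as zeroes; cmp exits non-zero on the
    # first differing byte within the inspected 4 MiB window.
    cmp --bytes=$((4 * 1024 * 1024)) "$DATA" /dev/zero

    # Record a checksum of the read-back data for later comparison.
    md5sum "$DATA"

    # Write 1024 blocks of random data into the ftl0 bdev; --json makes
    # spdk_dd stand up the bdev stack described in ftl.json in-process,
    # which is exactly the SPDK/DPDK initialization logged next.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if="$PATTERN" --ob=ftl0 --count=1024 --json="$FTL_JSON"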
00:27:24.062 [2024-12-06 13:22:10.869624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78850 ] 00:27:24.062 [2024-12-06 13:22:11.044570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.320 [2024-12-06 13:22:11.165420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.579 [2024-12-06 13:22:11.527645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:24.579 [2024-12-06 13:22:11.527960] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:24.838 [2024-12-06 13:22:11.693184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.838 [2024-12-06 13:22:11.693432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:24.838 [2024-12-06 13:22:11.693490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:24.838 [2024-12-06 13:22:11.693511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.838 [2024-12-06 13:22:11.697136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.838 [2024-12-06 13:22:11.697181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:24.838 [2024-12-06 13:22:11.697208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:27:24.838 [2024-12-06 13:22:11.697228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.838 [2024-12-06 13:22:11.697437] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:24.838 [2024-12-06 13:22:11.698565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:24.838 [2024-12-06 13:22:11.698631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.838 [2024-12-06 13:22:11.698656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:24.838 [2024-12-06 13:22:11.698678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:27:24.838 [2024-12-06 13:22:11.698697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.701034] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:24.839 [2024-12-06 13:22:11.718061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.718108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:24.839 [2024-12-06 13:22:11.718173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.028 ms 00:27:24.839 [2024-12-06 13:22:11.718211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.718402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.718434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:24.839 [2024-12-06 13:22:11.718458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:24.839 [2024-12-06 13:22:11.718478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.727559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:24.839 [2024-12-06 13:22:11.727624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:24.839 [2024-12-06 13:22:11.727651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.988 ms 00:27:24.839 [2024-12-06 13:22:11.727671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.727835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.727865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:24.839 [2024-12-06 13:22:11.727888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:24.839 [2024-12-06 13:22:11.727910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.727993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.728034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:24.839 [2024-12-06 13:22:11.728057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:24.839 [2024-12-06 13:22:11.728076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.728132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:24.839 [2024-12-06 13:22:11.733410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.733632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:24.839 [2024-12-06 13:22:11.733671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.289 ms 00:27:24.839 [2024-12-06 13:22:11.733694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.733824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.733855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:24.839 [2024-12-06 13:22:11.733879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:24.839 [2024-12-06 13:22:11.733899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.733960] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:24.839 [2024-12-06 13:22:11.734006] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:24.839 [2024-12-06 13:22:11.734069] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:24.839 [2024-12-06 13:22:11.734106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:24.839 [2024-12-06 13:22:11.734314] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:24.839 [2024-12-06 13:22:11.734348] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:24.839 [2024-12-06 13:22:11.734374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:24.839 [2024-12-06 13:22:11.734409] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:24.839 [2024-12-06 13:22:11.734432] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:24.839 [2024-12-06 13:22:11.734454] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:24.839 [2024-12-06 13:22:11.734480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:24.839 [2024-12-06 13:22:11.734514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:24.839 [2024-12-06 13:22:11.734532] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:24.839 [2024-12-06 13:22:11.734554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.734588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:24.839 [2024-12-06 13:22:11.734608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:27:24.839 [2024-12-06 13:22:11.734627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.734755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.839 [2024-12-06 13:22:11.734789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:24.839 [2024-12-06 13:22:11.734810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:24.839 [2024-12-06 13:22:11.734845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.839 [2024-12-06 13:22:11.735005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:24.839 [2024-12-06 13:22:11.735035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:24.839 [2024-12-06 13:22:11.735058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:24.839 [2024-12-06 13:22:11.735078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:24.839 [2024-12-06 13:22:11.735117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:24.839 [2024-12-06 13:22:11.735156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:24.839 [2024-12-06 13:22:11.735176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:24.839 [2024-12-06 13:22:11.735246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:24.839 [2024-12-06 13:22:11.735281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:24.839 [2024-12-06 13:22:11.735300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:24.839 [2024-12-06 13:22:11.735317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:24.839 [2024-12-06 13:22:11.735335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:24.839 [2024-12-06 13:22:11.735353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:24.839 [2024-12-06 13:22:11.735388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:24.839 [2024-12-06 13:22:11.735405] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:24.839 [2024-12-06 13:22:11.735441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.839 [2024-12-06 13:22:11.735492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:24.839 [2024-12-06 13:22:11.735511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.839 [2024-12-06 13:22:11.735546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:24.839 [2024-12-06 13:22:11.735581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.839 [2024-12-06 13:22:11.735617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:24.839 [2024-12-06 13:22:11.735636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.839 [2024-12-06 13:22:11.735675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:24.839 [2024-12-06 13:22:11.735695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:24.839 [2024-12-06 13:22:11.735733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:24.839 [2024-12-06 13:22:11.735753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:24.839 [2024-12-06 13:22:11.735772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:24.839 [2024-12-06 13:22:11.735806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:24.839 [2024-12-06 13:22:11.735830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:24.839 [2024-12-06 13:22:11.735848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:24.839 [2024-12-06 13:22:11.735884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:24.839 [2024-12-06 13:22:11.735902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.839 [2024-12-06 13:22:11.735938] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:24.839 [2024-12-06 13:22:11.735958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:24.839 [2024-12-06 13:22:11.735984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:24.839 [2024-12-06 13:22:11.736003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.839 [2024-12-06 13:22:11.736023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:24.839 [2024-12-06 13:22:11.736042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:24.839 [2024-12-06 13:22:11.736061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:24.839 
[2024-12-06 13:22:11.736079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:24.839 [2024-12-06 13:22:11.736097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:24.839 [2024-12-06 13:22:11.736115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:24.840 [2024-12-06 13:22:11.736136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:24.840 [2024-12-06 13:22:11.736159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.840 [2024-12-06 13:22:11.736180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:24.840 [2024-12-06 13:22:11.736214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:24.840 [2024-12-06 13:22:11.736237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:24.840 [2024-12-06 13:22:11.736256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:24.840 [2024-12-06 13:22:11.736275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:24.840 [2024-12-06 13:22:11.736295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:24.840 [2024-12-06 13:22:11.736315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:24.840 [2024-12-06 13:22:11.736333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:24.840 [2024-12-06 13:22:11.736353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:24.840 [2024-12-06 13:22:11.736373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:24.840 [2024-12-06 13:22:11.736408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:24.840 [2024-12-06 13:22:11.736427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:24.840 [2024-12-06 13:22:11.736447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:24.840 [2024-12-06 13:22:11.736467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:24.840 [2024-12-06 13:22:11.736486] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:24.840 [2024-12-06 13:22:11.736508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.840 [2024-12-06 13:22:11.736531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:24.840 [2024-12-06 13:22:11.736550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:24.840 [2024-12-06 13:22:11.736601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:24.840 [2024-12-06 13:22:11.736622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:24.840 [2024-12-06 13:22:11.736645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.840 [2024-12-06 13:22:11.736675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:24.840 [2024-12-06 13:22:11.736697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:27:24.840 [2024-12-06 13:22:11.736716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.840 [2024-12-06 13:22:11.779936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.840 [2024-12-06 13:22:11.780318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:24.840 [2024-12-06 13:22:11.780485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.085 ms 00:27:24.840 [2024-12-06 13:22:11.780661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.840 [2024-12-06 13:22:11.780980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.840 [2024-12-06 13:22:11.781138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:24.840 [2024-12-06 13:22:11.781309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:24.840 [2024-12-06 13:22:11.781476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.840 [2024-12-06 13:22:11.840008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.840 [2024-12-06 13:22:11.840317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:24.840 [2024-12-06 13:22:11.840504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.330 ms 00:27:24.840 [2024-12-06 13:22:11.840675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.840 [2024-12-06 13:22:11.840945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.840 [2024-12-06 13:22:11.841022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:24.840 [2024-12-06 13:22:11.841257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:24.840 [2024-12-06 13:22:11.841339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.840 [2024-12-06 13:22:11.842223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.840 [2024-12-06 13:22:11.842393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:24.840 [2024-12-06 13:22:11.842556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:27:24.840 [2024-12-06 13:22:11.842717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.840 [2024-12-06 13:22:11.843017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.840 [2024-12-06 13:22:11.843169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:24.840 [2024-12-06 13:22:11.843362] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:27:24.840 [2024-12-06 13:22:11.843457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:11.863937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:11.864157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.100 [2024-12-06 13:22:11.864324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.360 ms 00:27:25.100 [2024-12-06 13:22:11.864407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:11.881187] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:25.100 [2024-12-06 13:22:11.881228] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:25.100 [2024-12-06 13:22:11.881257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:11.881276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:25.100 [2024-12-06 13:22:11.881295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.487 ms 00:27:25.100 [2024-12-06 13:22:11.881312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:11.910405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:11.910572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:25.100 [2024-12-06 13:22:11.910612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.976 ms 00:27:25.100 [2024-12-06 13:22:11.910637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:11.926279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:11.926326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:25.100 [2024-12-06 13:22:11.926360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.510 ms 00:27:25.100 [2024-12-06 13:22:11.926380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:11.941630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:11.941690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:25.100 [2024-12-06 13:22:11.941718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.128 ms 00:27:25.100 [2024-12-06 13:22:11.941738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:11.942790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:11.942978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:25.100 [2024-12-06 13:22:11.943018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:27:25.100 [2024-12-06 13:22:11.943040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.020703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.021088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:25.100 [2024-12-06 13:22:12.021152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.605 ms 00:27:25.100 [2024-12-06 13:22:12.021178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.033883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:25.100 [2024-12-06 13:22:12.056089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.056174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:25.100 [2024-12-06 13:22:12.056222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.724 ms 00:27:25.100 [2024-12-06 13:22:12.056253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.056498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.056527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:25.100 [2024-12-06 13:22:12.056566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:25.100 [2024-12-06 13:22:12.056616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.056743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.056782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:25.100 [2024-12-06 13:22:12.056805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:25.100 [2024-12-06 13:22:12.056834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.056911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.056941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:25.100 [2024-12-06 13:22:12.056964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:25.100 [2024-12-06 13:22:12.056984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.057056] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:25.100 [2024-12-06 13:22:12.057085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.057104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:25.100 [2024-12-06 13:22:12.057128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:25.100 [2024-12-06 13:22:12.057148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.089886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.089931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:25.100 [2024-12-06 13:22:12.089959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.687 ms 00:27:25.100 [2024-12-06 13:22:12.089979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.100 [2024-12-06 13:22:12.090212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.100 [2024-12-06 13:22:12.090256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:25.100 [2024-12-06 13:22:12.090282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:27:25.100 [2024-12-06 13:22:12.090303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
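The startup traced above is a restore of an existing FTL instance rather than a fresh format: the superblock is loaded and validated, then NV cache, valid map, band info, trim, P2L and L2P metadata are each restored, and the instance is marked dirty ('Set FTL dirty state') until the next clean shutdown. The layout dump is internally consistent, e.g. 23592960 L2P entries * 4-byte addresses = 94371840 bytes, exactly the 90.00 MiB reported for the l2p region. The ftl.json handed to spdk_dd is what declares this ftl0 bdev on top of a base bdev and the nvc0n1p0 write-buffer cache named in the log; a hypothetical minimal config of the kind it consumes is sketched below. The placeholder base bdev name and the exact bdev_ftl_create parameter set are assumptions from memory, not taken from this log; check rpc.py bdev_ftl_create -h on the build in use for the authoritative schema.

    # Hypothetical sketch: emit a minimal SPDK JSON config declaring ftl0.
    # "base0" is a placeholder base bdev; the UUID is the one shown in the
    # stats dumps of this run, which (as I recall the FTL bdev module's
    # behavior) selects restoring the existing instance over re-formatting.
    cat > ftl.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_ftl_create",
              "params": {
                "name": "ftl0",
                "base_bdev": "base0",
                "cache": "nvc0n1p0",
                "uuid": "971c63eb-6a00-4479-bdc9-d0eddd7420fb"
              }
            }
          ]
        }
      ]
    }
    EOF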
00:27:25.100 [2024-12-06 13:22:12.091635] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:25.100 [2024-12-06 13:22:12.096035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.118 ms, result 0 00:27:25.100 [2024-12-06 13:22:12.097010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.100 [2024-12-06 13:22:12.113674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:25.359  [2024-12-06T13:22:12.375Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-12-06 13:22:12.295880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.359 [2024-12-06 13:22:12.307823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.359 [2024-12-06 13:22:12.307868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.359 [2024-12-06 13:22:12.307908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:25.359 [2024-12-06 13:22:12.307928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.359 [2024-12-06 13:22:12.307971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:25.359 [2024-12-06 13:22:12.311848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.359 [2024-12-06 13:22:12.311905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.359 [2024-12-06 13:22:12.311931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.844 ms 00:27:25.359 [2024-12-06 13:22:12.311968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.359 [2024-12-06 13:22:12.313841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.359 [2024-12-06 13:22:12.313887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.359 [2024-12-06 13:22:12.313915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:27:25.359 [2024-12-06 13:22:12.313935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.359 [2024-12-06 13:22:12.318038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.359 [2024-12-06 13:22:12.318099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.360 [2024-12-06 13:22:12.318159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:27:25.360 [2024-12-06 13:22:12.318185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.360 [2024-12-06 13:22:12.325519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.360 [2024-12-06 13:22:12.325575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.360 [2024-12-06 13:22:12.325600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.253 ms 00:27:25.360 [2024-12-06 13:22:12.325619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.360 [2024-12-06 13:22:12.355689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.360 [2024-12-06 13:22:12.355752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.360 [2024-12-06 13:22:12.355795] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 29.992 ms 00:27:25.360 [2024-12-06 13:22:12.355816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.619 [2024-12-06 13:22:12.374639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.619 [2024-12-06 13:22:12.374695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.619 [2024-12-06 13:22:12.374724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.735 ms 00:27:25.619 [2024-12-06 13:22:12.374745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.619 [2024-12-06 13:22:12.374987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.619 [2024-12-06 13:22:12.375017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.619 [2024-12-06 13:22:12.375055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:27:25.619 [2024-12-06 13:22:12.375074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.619 [2024-12-06 13:22:12.406632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.619 [2024-12-06 13:22:12.406678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.619 [2024-12-06 13:22:12.406704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.506 ms 00:27:25.619 [2024-12-06 13:22:12.406723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.619 [2024-12-06 13:22:12.437020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.619 [2024-12-06 13:22:12.437066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.619 [2024-12-06 13:22:12.437092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.215 ms 00:27:25.619 [2024-12-06 13:22:12.437111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.619 [2024-12-06 13:22:12.465971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.619 [2024-12-06 13:22:12.466013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.619 [2024-12-06 13:22:12.466038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.756 ms 00:27:25.620 [2024-12-06 13:22:12.466057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.620 [2024-12-06 13:22:12.495005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.620 [2024-12-06 13:22:12.495048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.620 [2024-12-06 13:22:12.495089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.715 ms 00:27:25.620 [2024-12-06 13:22:12.495108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.620 [2024-12-06 13:22:12.495222] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.620 [2024-12-06 13:22:12.495257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:27:25.620 [2024-12-06 13:22:12.495364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.495984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.496983] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.497003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.497023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.497043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.497062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.620 [2024-12-06 13:22:12.497081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.621 [2024-12-06 13:22:12.497539] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.621 [2024-12-06 13:22:12.497561] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 971c63eb-6a00-4479-bdc9-d0eddd7420fb 00:27:25.621 [2024-12-06 13:22:12.497581] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:25.621 [2024-12-06 13:22:12.497600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:27:25.621 [2024-12-06 13:22:12.497619] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:25.621 [2024-12-06 13:22:12.497639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:25.621 [2024-12-06 13:22:12.497657] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.621 [2024-12-06 13:22:12.497677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.621 [2024-12-06 13:22:12.497705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.621 [2024-12-06 13:22:12.497724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.621 [2024-12-06 13:22:12.497741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.621 [2024-12-06 13:22:12.497763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.621 [2024-12-06 13:22:12.497783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.621 [2024-12-06 13:22:12.497805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:27:25.621 [2024-12-06 13:22:12.497825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.621 [2024-12-06 13:22:12.515721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.621 [2024-12-06 13:22:12.515764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.621 [2024-12-06 13:22:12.515792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.854 ms 00:27:25.621 [2024-12-06 13:22:12.515812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.621 [2024-12-06 13:22:12.516470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.621 [2024-12-06 13:22:12.516512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.621 [2024-12-06 13:22:12.516538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:27:25.621 [2024-12-06 13:22:12.516560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.621 [2024-12-06 13:22:12.561331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.621 [2024-12-06 13:22:12.561379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.621 [2024-12-06 13:22:12.561404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.621 [2024-12-06 13:22:12.561445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.621 [2024-12-06 13:22:12.561570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.621 [2024-12-06 13:22:12.561598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.621 [2024-12-06 13:22:12.561620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.621 [2024-12-06 13:22:12.561639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.621 [2024-12-06 13:22:12.561729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.621 [2024-12-06 13:22:12.561755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.621 [2024-12-06 13:22:12.561792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.621 [2024-12-06 13:22:12.561824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.621 [2024-12-06 13:22:12.561872] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.621 [2024-12-06 13:22:12.561893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.621 [2024-12-06 13:22:12.561913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.621 [2024-12-06 13:22:12.561931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.879 [2024-12-06 13:22:12.664314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.879 [2024-12-06 13:22:12.664393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.879 [2024-12-06 13:22:12.664423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.879 [2024-12-06 13:22:12.664452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.879 [2024-12-06 13:22:12.750930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.879 [2024-12-06 13:22:12.750993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.879 [2024-12-06 13:22:12.751022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.880 [2024-12-06 13:22:12.751041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.880 [2024-12-06 13:22:12.751225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.880 [2024-12-06 13:22:12.751270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.880 [2024-12-06 13:22:12.751299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.880 [2024-12-06 13:22:12.751319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.880 [2024-12-06 13:22:12.751375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.880 [2024-12-06 13:22:12.751408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.880 [2024-12-06 13:22:12.751446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.880 [2024-12-06 13:22:12.751465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.880 [2024-12-06 13:22:12.751669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.880 [2024-12-06 13:22:12.751698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.880 [2024-12-06 13:22:12.751721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.880 [2024-12-06 13:22:12.751740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.880 [2024-12-06 13:22:12.751819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.880 [2024-12-06 13:22:12.751846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.880 [2024-12-06 13:22:12.751893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.880 [2024-12-06 13:22:12.751914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.880 [2024-12-06 13:22:12.752001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.880 [2024-12-06 13:22:12.752026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.880 [2024-12-06 13:22:12.752046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.880 [2024-12-06 13:22:12.752064] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:25.880 [2024-12-06 13:22:12.752148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.880 [2024-12-06 13:22:12.752182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.880 [2024-12-06 13:22:12.752204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.880 [2024-12-06 13:22:12.752237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.880 [2024-12-06 13:22:12.752521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.661 ms, result 0 00:27:26.814 00:27:26.814 00:27:26.814 13:22:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78885 00:27:26.814 13:22:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:26.814 13:22:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78885 00:27:26.814 13:22:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78885 ']' 00:27:26.814 13:22:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.814 13:22:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.814 13:22:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.814 13:22:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.814 13:22:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:27.072 [2024-12-06 13:22:13.886400] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:27:27.072 [2024-12-06 13:22:13.886586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78885 ] 00:27:27.072 [2024-12-06 13:22:14.069312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.331 [2024-12-06 13:22:14.190459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.274 13:22:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.274 13:22:15 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:28.274 13:22:15 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:28.534 [2024-12-06 13:22:15.289916] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.534 [2024-12-06 13:22:15.289993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.534 [2024-12-06 13:22:15.481495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.534 [2024-12-06 13:22:15.481567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:28.534 [2024-12-06 13:22:15.481607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:28.535 [2024-12-06 13:22:15.481627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.485868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.485915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.535 [2024-12-06 13:22:15.485947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:27:28.535 [2024-12-06 13:22:15.485968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.486192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:28.535 [2024-12-06 13:22:15.487308] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:28.535 [2024-12-06 13:22:15.487356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.487380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.535 [2024-12-06 13:22:15.487405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:27:28.535 [2024-12-06 13:22:15.487425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.489659] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:28.535 [2024-12-06 13:22:15.506457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.506518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:28.535 [2024-12-06 13:22:15.506563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.803 ms 00:27:28.535 [2024-12-06 13:22:15.506612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.506807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.506853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:28.535 [2024-12-06 13:22:15.506896] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:28.535 [2024-12-06 13:22:15.506928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.516020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.516087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.535 [2024-12-06 13:22:15.516115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.926 ms 00:27:28.535 [2024-12-06 13:22:15.516208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.516470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.516517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.535 [2024-12-06 13:22:15.516543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:27:28.535 [2024-12-06 13:22:15.516606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.516673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.516713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:28.535 [2024-12-06 13:22:15.516737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:28.535 [2024-12-06 13:22:15.516768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.516859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:28.535 [2024-12-06 13:22:15.521984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.522026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.535 [2024-12-06 13:22:15.522065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:27:28.535 [2024-12-06 13:22:15.522088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.522282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.522314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:28.535 [2024-12-06 13:22:15.522350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:28.535 [2024-12-06 13:22:15.522383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.522444] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:28.535 [2024-12-06 13:22:15.522500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:28.535 [2024-12-06 13:22:15.522602] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:28.535 [2024-12-06 13:22:15.522653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:28.535 [2024-12-06 13:22:15.522824] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:28.535 [2024-12-06 13:22:15.522855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:28.535 [2024-12-06 13:22:15.522922] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:28.535 [2024-12-06 13:22:15.522950] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:28.535 [2024-12-06 13:22:15.522984] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:28.535 [2024-12-06 13:22:15.523008] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:28.535 [2024-12-06 13:22:15.523037] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:28.535 [2024-12-06 13:22:15.523059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:28.535 [2024-12-06 13:22:15.523100] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:28.535 [2024-12-06 13:22:15.523157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.523193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:28.535 [2024-12-06 13:22:15.523218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:27:28.535 [2024-12-06 13:22:15.523249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.523402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.535 [2024-12-06 13:22:15.523454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:28.535 [2024-12-06 13:22:15.523478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:28.535 [2024-12-06 13:22:15.523508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.535 [2024-12-06 13:22:15.523648] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:28.535 [2024-12-06 13:22:15.523687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:28.535 [2024-12-06 13:22:15.523710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.535 [2024-12-06 13:22:15.523739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.535 [2024-12-06 13:22:15.523761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:28.535 [2024-12-06 13:22:15.523793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:28.536 [2024-12-06 13:22:15.523814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:28.536 [2024-12-06 13:22:15.523848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:28.536 [2024-12-06 13:22:15.523868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:28.536 [2024-12-06 13:22:15.523899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.536 [2024-12-06 13:22:15.523919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:28.536 [2024-12-06 13:22:15.523947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:28.536 [2024-12-06 13:22:15.523966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.536 [2024-12-06 13:22:15.523994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:28.536 [2024-12-06 13:22:15.524014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:28.536 [2024-12-06 13:22:15.524041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.536 
[2024-12-06 13:22:15.524060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:28.536 [2024-12-06 13:22:15.524088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:28.536 [2024-12-06 13:22:15.524141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:28.536 [2024-12-06 13:22:15.524197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.536 [2024-12-06 13:22:15.524245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:28.536 [2024-12-06 13:22:15.524279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.536 [2024-12-06 13:22:15.524327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:28.536 [2024-12-06 13:22:15.524347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.536 [2024-12-06 13:22:15.524393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:28.536 [2024-12-06 13:22:15.524422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.536 [2024-12-06 13:22:15.524494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:28.536 [2024-12-06 13:22:15.524515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.536 [2024-12-06 13:22:15.524562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:28.536 [2024-12-06 13:22:15.524604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:28.536 [2024-12-06 13:22:15.524625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.536 [2024-12-06 13:22:15.524653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:28.536 [2024-12-06 13:22:15.524674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:28.536 [2024-12-06 13:22:15.524711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:28.536 [2024-12-06 13:22:15.524762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:28.536 [2024-12-06 13:22:15.524783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524810] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:28.536 [2024-12-06 13:22:15.524855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:28.536 [2024-12-06 13:22:15.524883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.536 [2024-12-06 13:22:15.524905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.536 [2024-12-06 13:22:15.524934] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:28.536 [2024-12-06 13:22:15.524955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:28.536 [2024-12-06 13:22:15.524983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:28.536 [2024-12-06 13:22:15.525017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:28.536 [2024-12-06 13:22:15.525045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:28.536 [2024-12-06 13:22:15.525066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:28.536 [2024-12-06 13:22:15.525108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:28.536 [2024-12-06 13:22:15.525537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.536 [2024-12-06 13:22:15.525698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:28.536 [2024-12-06 13:22:15.525982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:28.536 [2024-12-06 13:22:15.526218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:28.536 [2024-12-06 13:22:15.526449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:28.536 [2024-12-06 13:22:15.526656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:28.536 [2024-12-06 13:22:15.526896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:28.536 [2024-12-06 13:22:15.527010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:28.536 [2024-12-06 13:22:15.527257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:28.536 [2024-12-06 13:22:15.527374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:28.536 [2024-12-06 13:22:15.527574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:28.536 [2024-12-06 13:22:15.527689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:28.536 [2024-12-06 13:22:15.527884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:28.536 [2024-12-06 13:22:15.528006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:28.536 [2024-12-06 13:22:15.528121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:28.536 [2024-12-06 13:22:15.528326] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:28.536 [2024-12-06 
13:22:15.528516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.536 [2024-12-06 13:22:15.528648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:28.537 [2024-12-06 13:22:15.528756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:28.537 [2024-12-06 13:22:15.528922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:28.537 [2024-12-06 13:22:15.528953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:28.537 [2024-12-06 13:22:15.528989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.537 [2024-12-06 13:22:15.529013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:28.537 [2024-12-06 13:22:15.529052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.402 ms 00:27:28.537 [2024-12-06 13:22:15.529084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.795 [2024-12-06 13:22:15.569791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.795 [2024-12-06 13:22:15.570073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.795 [2024-12-06 13:22:15.570158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.489 ms 00:27:28.795 [2024-12-06 13:22:15.570198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.795 [2024-12-06 13:22:15.570477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.795 [2024-12-06 13:22:15.570509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:28.795 [2024-12-06 13:22:15.570543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:28.795 [2024-12-06 13:22:15.570566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.795 [2024-12-06 13:22:15.615849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.795 [2024-12-06 13:22:15.616088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.795 [2024-12-06 13:22:15.616192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.229 ms 00:27:28.795 [2024-12-06 13:22:15.616219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.795 [2024-12-06 13:22:15.616425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.795 [2024-12-06 13:22:15.616469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.795 [2024-12-06 13:22:15.616520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.795 [2024-12-06 13:22:15.616543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.795 [2024-12-06 13:22:15.617335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.795 [2024-12-06 13:22:15.617378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:28.795 [2024-12-06 13:22:15.617434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:27:28.795 [2024-12-06 13:22:15.617457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:28.795 [2024-12-06 13:22:15.617758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.795 [2024-12-06 13:22:15.617811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.795 [2024-12-06 13:22:15.617882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:27:28.795 [2024-12-06 13:22:15.617912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.795 [2024-12-06 13:22:15.639943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.796 [2024-12-06 13:22:15.640177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.796 [2024-12-06 13:22:15.640255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.975 ms 00:27:28.796 [2024-12-06 13:22:15.640282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.796 [2024-12-06 13:22:15.670061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:28.796 [2024-12-06 13:22:15.670105] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:28.796 [2024-12-06 13:22:15.670178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.796 [2024-12-06 13:22:15.670214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:28.796 [2024-12-06 13:22:15.670295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.669 ms 00:27:28.796 [2024-12-06 13:22:15.670338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.796 [2024-12-06 13:22:15.696396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.796 [2024-12-06 13:22:15.696456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:28.796 [2024-12-06 13:22:15.696495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.925 ms 00:27:28.796 [2024-12-06 13:22:15.696518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.796 [2024-12-06 13:22:15.711150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.796 [2024-12-06 13:22:15.711199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:28.796 [2024-12-06 13:22:15.711242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.466 ms 00:27:28.796 [2024-12-06 13:22:15.711264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.796 [2024-12-06 13:22:15.726686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.796 [2024-12-06 13:22:15.726731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:28.796 [2024-12-06 13:22:15.726772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.302 ms 00:27:28.796 [2024-12-06 13:22:15.726795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.796 [2024-12-06 13:22:15.727847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.796 [2024-12-06 13:22:15.728025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:28.796 [2024-12-06 13:22:15.728078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:27:28.796 [2024-12-06 13:22:15.728104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.796 [2024-12-06 
13:22:15.804680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.796 [2024-12-06 13:22:15.804783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:28.796 [2024-12-06 13:22:15.804818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.498 ms 00:27:28.796 [2024-12-06 13:22:15.804837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.054 [2024-12-06 13:22:15.816351] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:29.054 [2024-12-06 13:22:15.836111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.054 [2024-12-06 13:22:15.836196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:29.054 [2024-12-06 13:22:15.836239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.099 ms 00:27:29.054 [2024-12-06 13:22:15.836261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.054 [2024-12-06 13:22:15.836459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.054 [2024-12-06 13:22:15.836493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:29.054 [2024-12-06 13:22:15.836517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:29.054 [2024-12-06 13:22:15.836543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.054 [2024-12-06 13:22:15.836674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.054 [2024-12-06 13:22:15.836706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:29.054 [2024-12-06 13:22:15.836728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:29.054 [2024-12-06 13:22:15.836756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.054 [2024-12-06 13:22:15.836810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.054 [2024-12-06 13:22:15.836843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:29.054 [2024-12-06 13:22:15.836866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:29.054 [2024-12-06 13:22:15.836896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.055 [2024-12-06 13:22:15.836993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:29.055 [2024-12-06 13:22:15.837029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.055 [2024-12-06 13:22:15.837056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:29.055 [2024-12-06 13:22:15.837080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:29.055 [2024-12-06 13:22:15.837101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.055 [2024-12-06 13:22:15.869782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.055 [2024-12-06 13:22:15.869831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:29.055 [2024-12-06 13:22:15.869880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.620 ms 00:27:29.055 [2024-12-06 13:22:15.869905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.055 [2024-12-06 13:22:15.870104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.055 [2024-12-06 13:22:15.870150] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:29.055 [2024-12-06 13:22:15.870233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:29.055 [2024-12-06 13:22:15.870283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.055 [2024-12-06 13:22:15.871746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:29.055 [2024-12-06 13:22:15.876096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.754 ms, result 0 00:27:29.055 [2024-12-06 13:22:15.877333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:29.055 Some configs were skipped because the RPC state that can call them passed over. 00:27:29.055 13:22:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:29.339 [2024-12-06 13:22:16.151799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.339 [2024-12-06 13:22:16.152002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:29.339 [2024-12-06 13:22:16.152233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:27:29.339 [2024-12-06 13:22:16.152411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.339 [2024-12-06 13:22:16.152675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.561 ms, result 0 00:27:29.339 true 00:27:29.339 13:22:16 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:29.602 [2024-12-06 13:22:16.428018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.602 [2024-12-06 13:22:16.428324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:29.602 [2024-12-06 13:22:16.428544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:27:29.602 [2024-12-06 13:22:16.428739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.602 [2024-12-06 13:22:16.428895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.235 ms, result 0 00:27:29.602 true 00:27:29.602 13:22:16 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78885 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78885 ']' 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78885 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78885 00:27:29.602 killing process with pid 78885 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78885' 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78885 00:27:29.602 13:22:16 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78885 00:27:30.539 [2024-12-06 13:22:17.425696] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.425771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:30.539 [2024-12-06 13:22:17.425801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:30.539 [2024-12-06 13:22:17.425822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.425868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:30.539 [2024-12-06 13:22:17.429599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.429638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:30.539 [2024-12-06 13:22:17.429671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:27:30.539 [2024-12-06 13:22:17.429689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.430107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.430153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:30.539 [2024-12-06 13:22:17.430182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:27:30.539 [2024-12-06 13:22:17.430200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.434128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.434218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:30.539 [2024-12-06 13:22:17.434305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.885 ms 00:27:30.539 [2024-12-06 13:22:17.434330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.441111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.441326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:30.539 [2024-12-06 13:22:17.441373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.695 ms 00:27:30.539 [2024-12-06 13:22:17.441396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.452872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.453109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:30.539 [2024-12-06 13:22:17.453167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.362 ms 00:27:30.539 [2024-12-06 13:22:17.453192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.461663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.461709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:30.539 [2024-12-06 13:22:17.461737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.375 ms 00:27:30.539 [2024-12-06 13:22:17.461773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.461966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.462027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:30.539 [2024-12-06 13:22:17.462057] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:27:30.539 [2024-12-06 13:22:17.462077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.474090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.474148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:30.539 [2024-12-06 13:22:17.474195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.961 ms 00:27:30.539 [2024-12-06 13:22:17.474214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.485819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.485862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:30.539 [2024-12-06 13:22:17.485897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.518 ms 00:27:30.539 [2024-12-06 13:22:17.485918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.497059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.497291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:30.539 [2024-12-06 13:22:17.497337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.079 ms 00:27:30.539 [2024-12-06 13:22:17.497359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.508718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.539 [2024-12-06 13:22:17.508903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:30.539 [2024-12-06 13:22:17.509076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.246 ms 00:27:30.539 [2024-12-06 13:22:17.509295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.539 [2024-12-06 13:22:17.509428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:30.539 [2024-12-06 13:22:17.509661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 
13:22:17.509912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.509992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:30.539 [2024-12-06 13:22:17.510390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:30.540 [2024-12-06 13:22:17.510586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.510994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.511985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.512006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.512041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.512064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.512096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:30.540 [2024-12-06 13:22:17.512164] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:30.540 [2024-12-06 13:22:17.512218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 971c63eb-6a00-4479-bdc9-d0eddd7420fb 00:27:30.540 [2024-12-06 13:22:17.512258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:30.540 [2024-12-06 13:22:17.512288] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:30.540 [2024-12-06 13:22:17.512310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:30.540 [2024-12-06 13:22:17.512340] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:30.540 [2024-12-06 13:22:17.512361] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:30.540 [2024-12-06 13:22:17.512391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:30.540 [2024-12-06 13:22:17.512412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:30.540 [2024-12-06 13:22:17.512440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:30.540 [2024-12-06 13:22:17.512461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:30.540 [2024-12-06 13:22:17.512491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:30.540 [2024-12-06 13:22:17.512514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:30.540 [2024-12-06 13:22:17.512546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.063 ms 00:27:30.540 [2024-12-06 13:22:17.512568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.540 [2024-12-06 13:22:17.529053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.540 [2024-12-06 13:22:17.529293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:30.540 [2024-12-06 13:22:17.529356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.368 ms 00:27:30.540 [2024-12-06 13:22:17.529384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.540 [2024-12-06 13:22:17.530022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.540 [2024-12-06 13:22:17.530053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:30.540 [2024-12-06 13:22:17.530099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:27:30.541 [2024-12-06 13:22:17.530119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.800 [2024-12-06 13:22:17.585214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.800 [2024-12-06 13:22:17.585266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:30.800 [2024-12-06 13:22:17.585304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.800 [2024-12-06 13:22:17.585325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.800 [2024-12-06 13:22:17.585489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.800 [2024-12-06 13:22:17.585518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:30.800 [2024-12-06 13:22:17.585575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.800 [2024-12-06 13:22:17.585596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.585700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.585732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:30.801 [2024-12-06 13:22:17.585772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.585793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.585845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.585885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:30.801 [2024-12-06 13:22:17.585947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.585980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.679338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.679412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:30.801 [2024-12-06 13:22:17.679451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.679472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 
13:22:17.755025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.755084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:30.801 [2024-12-06 13:22:17.755123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.755227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.755404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.755432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:30.801 [2024-12-06 13:22:17.755470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.755493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.755626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.755653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:30.801 [2024-12-06 13:22:17.755683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.755706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.755937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.755997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:30.801 [2024-12-06 13:22:17.756035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.756058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.756236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.756263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:30.801 [2024-12-06 13:22:17.756296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.756319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.756415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.756442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:30.801 [2024-12-06 13:22:17.756481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.756518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.756666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:30.801 [2024-12-06 13:22:17.756706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:30.801 [2024-12-06 13:22:17.756737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:30.801 [2024-12-06 13:22:17.756758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.801 [2024-12-06 13:22:17.757104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.296 ms, result 0 00:27:31.738 13:22:18 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:31.997 [2024-12-06 13:22:18.765375] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:27:31.997 [2024-12-06 13:22:18.765571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78943 ] 00:27:31.997 [2024-12-06 13:22:18.950350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.256 [2024-12-06 13:22:19.071922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.514 [2024-12-06 13:22:19.414761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:32.514 [2024-12-06 13:22:19.415111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:32.775 [2024-12-06 13:22:19.576461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.576516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:32.775 [2024-12-06 13:22:19.576545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:32.775 [2024-12-06 13:22:19.576563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.579944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.579990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.775 [2024-12-06 13:22:19.580016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:27:32.775 [2024-12-06 13:22:19.580035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.580240] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:32.775 [2024-12-06 13:22:19.581325] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:32.775 [2024-12-06 13:22:19.581369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.581393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.775 [2024-12-06 13:22:19.581414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:27:32.775 [2024-12-06 13:22:19.581431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.583655] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:32.775 [2024-12-06 13:22:19.599676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.599721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:32.775 [2024-12-06 13:22:19.599748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.023 ms 00:27:32.775 [2024-12-06 13:22:19.599767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.599927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.599954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:32.775 [2024-12-06 13:22:19.599974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:32.775 [2024-12-06 
13:22:19.599991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.609502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.609722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.775 [2024-12-06 13:22:19.609767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.431 ms 00:27:32.775 [2024-12-06 13:22:19.609789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.610010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.610041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.775 [2024-12-06 13:22:19.610065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:32.775 [2024-12-06 13:22:19.610084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.610405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.610561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:32.775 [2024-12-06 13:22:19.610605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:32.775 [2024-12-06 13:22:19.610628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.775 [2024-12-06 13:22:19.610699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:32.775 [2024-12-06 13:22:19.616103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.775 [2024-12-06 13:22:19.616172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.775 [2024-12-06 13:22:19.616200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.419 ms 00:27:32.776 [2024-12-06 13:22:19.616220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.776 [2024-12-06 13:22:19.616325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.776 [2024-12-06 13:22:19.616354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:32.776 [2024-12-06 13:22:19.616375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:32.776 [2024-12-06 13:22:19.616394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.776 [2024-12-06 13:22:19.616449] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:32.776 [2024-12-06 13:22:19.616522] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:32.776 [2024-12-06 13:22:19.616576] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:32.776 [2024-12-06 13:22:19.616610] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:32.776 [2024-12-06 13:22:19.616735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:32.776 [2024-12-06 13:22:19.616761] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:32.776 [2024-12-06 13:22:19.616782] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:27:32.776 [2024-12-06 13:22:19.616810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:32.776 [2024-12-06 13:22:19.616831] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:32.776 [2024-12-06 13:22:19.616850] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:32.776 [2024-12-06 13:22:19.616867] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:32.776 [2024-12-06 13:22:19.616889] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:32.776 [2024-12-06 13:22:19.616905] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:32.776 [2024-12-06 13:22:19.616923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.776 [2024-12-06 13:22:19.616940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:32.776 [2024-12-06 13:22:19.616959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:27:32.776 [2024-12-06 13:22:19.616974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.776 [2024-12-06 13:22:19.617089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.776 [2024-12-06 13:22:19.617119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:32.776 [2024-12-06 13:22:19.617154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:32.776 [2024-12-06 13:22:19.617488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.776 [2024-12-06 13:22:19.617726] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:32.776 [2024-12-06 13:22:19.617834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:32.776 [2024-12-06 13:22:19.617901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.776 [2024-12-06 13:22:19.617982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.776 [2024-12-06 13:22:19.618163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:32.776 [2024-12-06 13:22:19.618334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:32.776 [2024-12-06 13:22:19.618497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:32.776 [2024-12-06 13:22:19.618680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:32.776 [2024-12-06 13:22:19.618870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:32.776 [2024-12-06 13:22:19.618919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.776 [2024-12-06 13:22:19.618939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:32.776 [2024-12-06 13:22:19.618975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:32.776 [2024-12-06 13:22:19.618994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.776 [2024-12-06 13:22:19.619012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:32.776 [2024-12-06 13:22:19.619029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:32.776 [2024-12-06 13:22:19.619046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:27:32.776 [2024-12-06 13:22:19.619080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:32.776 [2024-12-06 13:22:19.619097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:32.776 [2024-12-06 13:22:19.619132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.776 [2024-12-06 13:22:19.619207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:32.776 [2024-12-06 13:22:19.619225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.776 [2024-12-06 13:22:19.619260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:32.776 [2024-12-06 13:22:19.619278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.776 [2024-12-06 13:22:19.619314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:32.776 [2024-12-06 13:22:19.619331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.776 [2024-12-06 13:22:19.619367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:32.776 [2024-12-06 13:22:19.619385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.776 [2024-12-06 13:22:19.619419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:32.776 [2024-12-06 13:22:19.619438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:32.776 [2024-12-06 13:22:19.619455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.776 [2024-12-06 13:22:19.619488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:32.776 [2024-12-06 13:22:19.619506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:32.776 [2024-12-06 13:22:19.619551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:32.776 [2024-12-06 13:22:19.619588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:32.776 [2024-12-06 13:22:19.619605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619622] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:32.776 [2024-12-06 13:22:19.619642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:32.776 [2024-12-06 13:22:19.619668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.776 [2024-12-06 13:22:19.619687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.776 [2024-12-06 13:22:19.619705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:32.776 [2024-12-06 13:22:19.619723] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:32.776 [2024-12-06 13:22:19.619739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:32.776 [2024-12-06 13:22:19.619756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:32.776 [2024-12-06 13:22:19.619773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:32.776 [2024-12-06 13:22:19.619790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:32.776 [2024-12-06 13:22:19.619809] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:32.776 [2024-12-06 13:22:19.619831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.776 [2024-12-06 13:22:19.619862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:32.776 [2024-12-06 13:22:19.619881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:32.776 [2024-12-06 13:22:19.619899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:32.776 [2024-12-06 13:22:19.619926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:32.776 [2024-12-06 13:22:19.619945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:32.776 [2024-12-06 13:22:19.619963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:32.777 [2024-12-06 13:22:19.619981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:32.777 [2024-12-06 13:22:19.619999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:32.777 [2024-12-06 13:22:19.620017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:32.777 [2024-12-06 13:22:19.620034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:32.777 [2024-12-06 13:22:19.620053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:32.777 [2024-12-06 13:22:19.620072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:32.777 [2024-12-06 13:22:19.620090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:32.777 [2024-12-06 13:22:19.620110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:32.777 [2024-12-06 13:22:19.620126] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:32.777 [2024-12-06 13:22:19.620178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.777 [2024-12-06 13:22:19.620227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:32.777 [2024-12-06 13:22:19.620255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:32.777 [2024-12-06 13:22:19.620276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:32.777 [2024-12-06 13:22:19.620314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:32.777 [2024-12-06 13:22:19.620338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.620366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:32.777 [2024-12-06 13:22:19.620388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:27:32.777 [2024-12-06 13:22:19.620407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.659360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.659421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.777 [2024-12-06 13:22:19.659450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.741 ms 00:27:32.777 [2024-12-06 13:22:19.659468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.659702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.659730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:32.777 [2024-12-06 13:22:19.659751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:32.777 [2024-12-06 13:22:19.659769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.711044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.711330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.777 [2024-12-06 13:22:19.711379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.229 ms 00:27:32.777 [2024-12-06 13:22:19.711402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.711615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.711691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.777 [2024-12-06 13:22:19.711718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:32.777 [2024-12-06 13:22:19.711738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.712467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.712508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.777 [2024-12-06 13:22:19.712545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:27:32.777 [2024-12-06 13:22:19.712580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.712848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.712937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.777 [2024-12-06 13:22:19.712965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:27:32.777 [2024-12-06 13:22:19.712991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.731629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.731674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.777 [2024-12-06 13:22:19.731701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.586 ms 00:27:32.777 [2024-12-06 13:22:19.731721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.747257] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:32.777 [2024-12-06 13:22:19.747300] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:32.777 [2024-12-06 13:22:19.747327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.747348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:32.777 [2024-12-06 13:22:19.747367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.409 ms 00:27:32.777 [2024-12-06 13:22:19.747383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.777 [2024-12-06 13:22:19.773517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.777 [2024-12-06 13:22:19.773561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:32.777 [2024-12-06 13:22:19.773587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.017 ms 00:27:32.777 [2024-12-06 13:22:19.773606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.787819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.787863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:33.037 [2024-12-06 13:22:19.787888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.120 ms 00:27:33.037 [2024-12-06 13:22:19.787906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.801812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.801859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:33.037 [2024-12-06 13:22:19.801886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.792 ms 00:27:33.037 [2024-12-06 13:22:19.801906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.802996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.803210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:33.037 [2024-12-06 13:22:19.803246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:27:33.037 [2024-12-06 13:22:19.803281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.878637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 
13:22:19.878944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:33.037 [2024-12-06 13:22:19.878986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.291 ms 00:27:33.037 [2024-12-06 13:22:19.879006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.890121] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:33.037 [2024-12-06 13:22:19.909497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.909837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:33.037 [2024-12-06 13:22:19.909880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.267 ms 00:27:33.037 [2024-12-06 13:22:19.909912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.910088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.910115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:33.037 [2024-12-06 13:22:19.910199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:33.037 [2024-12-06 13:22:19.910222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.910423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.910452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:33.037 [2024-12-06 13:22:19.910474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:33.037 [2024-12-06 13:22:19.910503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.910575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.910616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:33.037 [2024-12-06 13:22:19.910650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:33.037 [2024-12-06 13:22:19.910698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.910796] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:33.037 [2024-12-06 13:22:19.910837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.910864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:33.037 [2024-12-06 13:22:19.910884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:33.037 [2024-12-06 13:22:19.910902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.938354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.938399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:33.037 [2024-12-06 13:22:19.938425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.387 ms 00:27:33.037 [2024-12-06 13:22:19.938446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.037 [2024-12-06 13:22:19.938615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.037 [2024-12-06 13:22:19.938642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:33.037 [2024-12-06 
13:22:19.938662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:33.038 [2024-12-06 13:22:19.938680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.038 [2024-12-06 13:22:19.940197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:33.038 [2024-12-06 13:22:19.943987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 363.308 ms, result 0 00:27:33.038 [2024-12-06 13:22:19.944990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.038 [2024-12-06 13:22:19.959695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:34.415  [2024-12-06T13:22:22.367Z] Copying: 25/256 [MB] (25 MBps) [2024-12-06T13:22:23.303Z] Copying: 47/256 [MB] (22 MBps) [2024-12-06T13:22:24.239Z] Copying: 70/256 [MB] (22 MBps) [2024-12-06T13:22:25.177Z] Copying: 93/256 [MB] (22 MBps) [2024-12-06T13:22:26.106Z] Copying: 116/256 [MB] (23 MBps) [2024-12-06T13:22:27.038Z] Copying: 139/256 [MB] (23 MBps) [2024-12-06T13:22:28.410Z] Copying: 163/256 [MB] (23 MBps) [2024-12-06T13:22:29.345Z] Copying: 187/256 [MB] (23 MBps) [2024-12-06T13:22:30.279Z] Copying: 211/256 [MB] (23 MBps) [2024-12-06T13:22:31.216Z] Copying: 235/256 [MB] (24 MBps) [2024-12-06T13:22:31.216Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-06 13:22:30.947531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:44.200 [2024-12-06 13:22:30.964813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:30.965035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:44.200 [2024-12-06 13:22:30.965244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:44.200 [2024-12-06 13:22:30.965424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:30.965542] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:44.200 [2024-12-06 13:22:30.969691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:30.969881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:44.200 [2024-12-06 13:22:30.970060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.860 ms 00:27:44.200 [2024-12-06 13:22:30.970262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:30.970685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:30.970738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:44.200 [2024-12-06 13:22:30.970779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:27:44.200 [2024-12-06 13:22:30.970814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:30.974781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:30.974825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:44.200 [2024-12-06 13:22:30.974849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:27:44.200 [2024-12-06 13:22:30.974869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:30.982467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:30.982515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:44.200 [2024-12-06 13:22:30.982543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.535 ms 00:27:44.200 [2024-12-06 13:22:30.982563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:31.013507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:31.013549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:44.200 [2024-12-06 13:22:31.013582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.841 ms 00:27:44.200 [2024-12-06 13:22:31.013593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:31.031576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:31.031621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:44.200 [2024-12-06 13:22:31.031657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.934 ms 00:27:44.200 [2024-12-06 13:22:31.031693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:31.031912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:31.031943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:44.200 [2024-12-06 13:22:31.031997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:27:44.200 [2024-12-06 13:22:31.032031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:31.063337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:31.063549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:44.200 [2024-12-06 13:22:31.063589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.271 ms 00:27:44.200 [2024-12-06 13:22:31.063612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:31.096005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:31.096057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:44.200 [2024-12-06 13:22:31.096083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.321 ms 00:27:44.200 [2024-12-06 13:22:31.096102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:31.127240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.200 [2024-12-06 13:22:31.127283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:44.200 [2024-12-06 13:22:31.127308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.005 ms 00:27:44.200 [2024-12-06 13:22:31.127343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.200 [2024-12-06 13:22:31.157890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.201 [2024-12-06 13:22:31.157950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:44.201 [2024-12-06 13:22:31.157977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.423 ms 00:27:44.201 
[2024-12-06 13:22:31.158003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.201 [2024-12-06 13:22:31.158116] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:44.201 [2024-12-06 13:22:31.158191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:44.201 [2024-12-06 13:22:31.158731] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
[ftl_dev_dump_bands entries for Bands 25-100 elided: all identical, 0 / 261120 wr_cnt: 0 state: free]
00:27:44.202 [2024-12-06 13:22:31.160435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:44.202 [2024-12-06 13:22:31.160458] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 971c63eb-6a00-4479-bdc9-d0eddd7420fb
00:27:44.202 [2024-12-06 13:22:31.160478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:44.202 [2024-12-06 13:22:31.160496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:27:44.202 [2024-12-06 13:22:31.160515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:44.202 [2024-12-06 13:22:31.160543] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:44.202 [2024-12-06 13:22:31.160570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:44.202 [2024-12-06 13:22:31.160602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:44.202 [2024-12-06 13:22:31.160639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:44.202 [2024-12-06 13:22:31.160657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:44.202 [2024-12-06 13:22:31.160674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:44.202 [2024-12-06 13:22:31.160694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:44.202 [2024-12-06 13:22:31.160714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:44.202 [2024-12-06 13:22:31.160735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms
00:27:44.202 [2024-12-06 13:22:31.160753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:44.202 [2024-12-06 13:22:31.179587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:44.202 [2024-12-06 13:22:31.179630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:44.202 [2024-12-06 13:22:31.179664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.792 ms
00:27:44.202 [2024-12-06 13:22:31.179685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:44.202 [2024-12-06 13:22:31.180391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:44.202 [2024-12-06 13:22:31.180433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:44.202 [2024-12-06 13:22:31.180466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms
00:27:44.202 [2024-12-06 13:22:31.180486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:44.461 [2024-12-06 13:22:31.229095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:44.461 [2024-12-06 13:22:31.229175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:44.461 [2024-12-06 13:22:31.229217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:44.461 [2024-12-06 13:22:31.229243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:44.461 [2024-12-06 13:22:31.229427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.229471] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:44.461 [2024-12-06 13:22:31.229509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.229530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.229637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.229665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:44.461 [2024-12-06 13:22:31.229687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.229706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.229757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.229795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:44.461 [2024-12-06 13:22:31.229816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.229834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.337854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.338234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:44.461 [2024-12-06 13:22:31.338305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.338325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.423703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.423782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:44.461 [2024-12-06 13:22:31.423812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.423831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.423954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.423980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:44.461 [2024-12-06 13:22:31.424001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.424019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.424089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.424122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:44.461 [2024-12-06 13:22:31.424142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.424211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.424419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.424447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:44.461 [2024-12-06 13:22:31.424468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.424516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.424611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.424639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:44.461 [2024-12-06 13:22:31.424671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.424690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.424777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.424812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:44.461 [2024-12-06 13:22:31.424834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.424854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.424952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.461 [2024-12-06 13:22:31.424988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:44.461 [2024-12-06 13:22:31.425009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.461 [2024-12-06 13:22:31.425028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.461 [2024-12-06 13:22:31.425314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.472 ms, result 0 00:27:45.494 00:27:45.494 00:27:45.494 13:22:32 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:46.432 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:27:46.432 13:22:33 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:27:46.432 13:22:33 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:27:46.432 13:22:33 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:46.432 13:22:33 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:46.432 13:22:33 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:27:46.432 13:22:33 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:46.432 13:22:33 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78885 00:27:46.432 13:22:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78885 ']' 00:27:46.432 13:22:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78885 00:27:46.432 Process with pid 78885 is not found 00:27:46.432 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78885) - No such process 00:27:46.432 13:22:33 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78885 is not found' 00:27:46.432 00:27:46.432 real 1m12.542s 00:27:46.432 user 1m40.364s 00:27:46.432 sys 0m7.756s 00:27:46.432 ************************************ 00:27:46.432 END TEST ftl_trim 00:27:46.432 ************************************ 00:27:46.432 13:22:33 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.432 13:22:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:46.432 13:22:33 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:27:46.432 13:22:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:46.432 13:22:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.432 13:22:33 ftl -- common/autotest_common.sh@10 
-- # set +x 00:27:46.432 ************************************ 00:27:46.432 START TEST ftl_restore 00:27:46.432 ************************************ 00:27:46.432 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:27:46.432 * Looking for test storage... 00:27:46.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:46.432 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:46.432 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:46.432 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:27:46.432 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:27:46.432 13:22:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.691 13:22:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:27:46.691 13:22:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.691 13:22:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:27:46.691 13:22:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:27:46.691 13:22:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.691 13:22:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:27:46.692 13:22:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.692 13:22:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.692 13:22:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.692 13:22:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:46.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.692 --rc genhtml_branch_coverage=1 00:27:46.692 --rc genhtml_function_coverage=1 00:27:46.692 --rc genhtml_legend=1 00:27:46.692 --rc geninfo_all_blocks=1 00:27:46.692 --rc geninfo_unexecuted_blocks=1 00:27:46.692 00:27:46.692 ' 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:46.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.692 --rc genhtml_branch_coverage=1 00:27:46.692 --rc genhtml_function_coverage=1 00:27:46.692 --rc genhtml_legend=1 00:27:46.692 --rc geninfo_all_blocks=1 00:27:46.692 --rc geninfo_unexecuted_blocks=1 00:27:46.692 00:27:46.692 ' 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:46.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.692 --rc genhtml_branch_coverage=1 00:27:46.692 --rc genhtml_function_coverage=1 00:27:46.692 --rc genhtml_legend=1 00:27:46.692 --rc geninfo_all_blocks=1 00:27:46.692 --rc geninfo_unexecuted_blocks=1 00:27:46.692 00:27:46.692 ' 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:46.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.692 --rc genhtml_branch_coverage=1 00:27:46.692 --rc genhtml_function_coverage=1 00:27:46.692 --rc genhtml_legend=1 00:27:46.692 --rc geninfo_all_blocks=1 00:27:46.692 --rc geninfo_unexecuted_blocks=1 00:27:46.692 00:27:46.692 ' 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
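The lt / cmp_versions xtrace above splits each version string on "." and "-" (IFS=.-; read -ra) and compares the components numerically, index by index, with a missing component counting as 0. A minimal standalone sketch of that split-and-compare logic, assuming bash; the function name version_lt is hypothetical and this is a simplified stand-in for the scripts/common.sh helper, not the exact code:

#!/usr/bin/env bash
# Hypothetical, simplified re-implementation of the comparison traced above.
version_lt() {
    local IFS=.-                      # split on dots and dashes, as in the xtrace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing components count as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                          # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # same outcome as the lt 1.15 2 trace

That is why the run above takes the lcov-is-older branch and exports the legacy LCOV_OPTS flags.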
00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.KJSJA0OnS7 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:46.692 
13:22:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79155 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:46.692 13:22:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79155 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79155 ']' 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.692 13:22:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:46.692 [2024-12-06 13:22:33.628063] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:27:46.692 [2024-12-06 13:22:33.628481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79155 ] 00:27:46.951 [2024-12-06 13:22:33.820474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.209 [2024-12-06 13:22:33.981111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.147 13:22:34 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.147 13:22:34 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:27:48.147 13:22:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:48.147 13:22:34 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:27:48.147 13:22:34 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:48.147 13:22:34 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:27:48.147 13:22:34 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:27:48.147 13:22:34 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:48.406 13:22:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:48.406 13:22:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:27:48.406 13:22:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:48.406 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:48.406 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:48.406 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:48.406 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:48.406 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:48.664 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:48.664 { 00:27:48.664 "name": "nvme0n1", 00:27:48.664 "aliases": [ 00:27:48.664 "e30ce9c9-2c92-4ced-b960-30a130412f4a" 00:27:48.664 ], 00:27:48.664 "product_name": "NVMe disk", 00:27:48.664 "block_size": 4096, 00:27:48.664 "num_blocks": 1310720, 00:27:48.664 "uuid": 
"e30ce9c9-2c92-4ced-b960-30a130412f4a", 00:27:48.664 "numa_id": -1, 00:27:48.664 "assigned_rate_limits": { 00:27:48.664 "rw_ios_per_sec": 0, 00:27:48.664 "rw_mbytes_per_sec": 0, 00:27:48.664 "r_mbytes_per_sec": 0, 00:27:48.664 "w_mbytes_per_sec": 0 00:27:48.664 }, 00:27:48.664 "claimed": true, 00:27:48.664 "claim_type": "read_many_write_one", 00:27:48.664 "zoned": false, 00:27:48.664 "supported_io_types": { 00:27:48.664 "read": true, 00:27:48.664 "write": true, 00:27:48.664 "unmap": true, 00:27:48.664 "flush": true, 00:27:48.664 "reset": true, 00:27:48.664 "nvme_admin": true, 00:27:48.664 "nvme_io": true, 00:27:48.664 "nvme_io_md": false, 00:27:48.664 "write_zeroes": true, 00:27:48.664 "zcopy": false, 00:27:48.664 "get_zone_info": false, 00:27:48.664 "zone_management": false, 00:27:48.664 "zone_append": false, 00:27:48.664 "compare": true, 00:27:48.665 "compare_and_write": false, 00:27:48.665 "abort": true, 00:27:48.665 "seek_hole": false, 00:27:48.665 "seek_data": false, 00:27:48.665 "copy": true, 00:27:48.665 "nvme_iov_md": false 00:27:48.665 }, 00:27:48.665 "driver_specific": { 00:27:48.665 "nvme": [ 00:27:48.665 { 00:27:48.665 "pci_address": "0000:00:11.0", 00:27:48.665 "trid": { 00:27:48.665 "trtype": "PCIe", 00:27:48.665 "traddr": "0000:00:11.0" 00:27:48.665 }, 00:27:48.665 "ctrlr_data": { 00:27:48.665 "cntlid": 0, 00:27:48.665 "vendor_id": "0x1b36", 00:27:48.665 "model_number": "QEMU NVMe Ctrl", 00:27:48.665 "serial_number": "12341", 00:27:48.665 "firmware_revision": "8.0.0", 00:27:48.665 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:48.665 "oacs": { 00:27:48.665 "security": 0, 00:27:48.665 "format": 1, 00:27:48.665 "firmware": 0, 00:27:48.665 "ns_manage": 1 00:27:48.665 }, 00:27:48.665 "multi_ctrlr": false, 00:27:48.665 "ana_reporting": false 00:27:48.665 }, 00:27:48.665 "vs": { 00:27:48.665 "nvme_version": "1.4" 00:27:48.665 }, 00:27:48.665 "ns_data": { 00:27:48.665 "id": 1, 00:27:48.665 "can_share": false 00:27:48.665 } 00:27:48.665 } 00:27:48.665 ], 00:27:48.665 "mp_policy": "active_passive" 00:27:48.665 } 00:27:48.665 } 00:27:48.665 ]' 00:27:48.665 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:48.665 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:48.665 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:48.665 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:48.665 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:48.665 13:22:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:27:48.665 13:22:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:27:48.665 13:22:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:48.665 13:22:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:27:48.665 13:22:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:48.665 13:22:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:48.922 13:22:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d38fcbc9-594f-4f1c-8555-a09f64d2fb1c 00:27:48.922 13:22:35 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:27:48.922 13:22:35 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d38fcbc9-594f-4f1c-8555-a09f64d2fb1c 00:27:49.180 13:22:36 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=09c5d382-45b8-447b-8f8d-f1d5c58dd77d 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 09c5d382-45b8-447b-8f8d-f1d5c58dd77d 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:49.747 13:22:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:27:50.006 13:22:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:50.006 13:22:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:50.006 13:22:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:50.006 13:22:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:50.006 13:22:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:50.006 13:22:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:50.006 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:50.006 { 00:27:50.006 "name": "6ebf9e39-bd91-464d-9531-a9a62a4a6fa1", 00:27:50.006 "aliases": [ 00:27:50.006 "lvs/nvme0n1p0" 00:27:50.006 ], 00:27:50.006 "product_name": "Logical Volume", 00:27:50.006 "block_size": 4096, 00:27:50.006 "num_blocks": 26476544, 00:27:50.006 "uuid": "6ebf9e39-bd91-464d-9531-a9a62a4a6fa1", 00:27:50.006 "assigned_rate_limits": { 00:27:50.006 "rw_ios_per_sec": 0, 00:27:50.006 "rw_mbytes_per_sec": 0, 00:27:50.006 "r_mbytes_per_sec": 0, 00:27:50.006 "w_mbytes_per_sec": 0 00:27:50.006 }, 00:27:50.006 "claimed": false, 00:27:50.006 "zoned": false, 00:27:50.006 "supported_io_types": { 00:27:50.006 "read": true, 00:27:50.006 "write": true, 00:27:50.006 "unmap": true, 00:27:50.006 "flush": false, 00:27:50.006 "reset": true, 00:27:50.006 "nvme_admin": false, 00:27:50.006 "nvme_io": false, 00:27:50.006 "nvme_io_md": false, 00:27:50.006 "write_zeroes": true, 00:27:50.006 "zcopy": false, 00:27:50.006 "get_zone_info": false, 00:27:50.006 "zone_management": false, 00:27:50.006 "zone_append": false, 00:27:50.006 "compare": false, 00:27:50.006 "compare_and_write": false, 00:27:50.006 "abort": false, 00:27:50.006 "seek_hole": true, 00:27:50.006 "seek_data": true, 00:27:50.006 "copy": false, 00:27:50.006 "nvme_iov_md": false 00:27:50.006 }, 00:27:50.006 "driver_specific": { 00:27:50.006 "lvol": { 00:27:50.006 "lvol_store_uuid": "09c5d382-45b8-447b-8f8d-f1d5c58dd77d", 00:27:50.006 "base_bdev": "nvme0n1", 00:27:50.006 "thin_provision": true, 00:27:50.006 "num_allocated_clusters": 0, 00:27:50.006 "snapshot": false, 00:27:50.006 "clone": false, 00:27:50.006 "esnap_clone": false 00:27:50.006 } 00:27:50.006 } 00:27:50.006 } 00:27:50.006 ]' 00:27:50.006 13:22:37 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:50.265 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:50.265 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:50.265 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:50.265 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:50.265 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:27:50.265 13:22:37 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:27:50.265 13:22:37 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:27:50.265 13:22:37 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:50.523 13:22:37 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:50.523 13:22:37 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:50.523 13:22:37 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:50.523 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:50.523 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:50.523 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:50.523 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:50.523 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:50.781 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:50.781 { 00:27:50.781 "name": "6ebf9e39-bd91-464d-9531-a9a62a4a6fa1", 00:27:50.781 "aliases": [ 00:27:50.781 "lvs/nvme0n1p0" 00:27:50.781 ], 00:27:50.781 "product_name": "Logical Volume", 00:27:50.781 "block_size": 4096, 00:27:50.781 "num_blocks": 26476544, 00:27:50.781 "uuid": "6ebf9e39-bd91-464d-9531-a9a62a4a6fa1", 00:27:50.781 "assigned_rate_limits": { 00:27:50.781 "rw_ios_per_sec": 0, 00:27:50.781 "rw_mbytes_per_sec": 0, 00:27:50.781 "r_mbytes_per_sec": 0, 00:27:50.781 "w_mbytes_per_sec": 0 00:27:50.781 }, 00:27:50.781 "claimed": false, 00:27:50.781 "zoned": false, 00:27:50.781 "supported_io_types": { 00:27:50.781 "read": true, 00:27:50.781 "write": true, 00:27:50.781 "unmap": true, 00:27:50.781 "flush": false, 00:27:50.781 "reset": true, 00:27:50.781 "nvme_admin": false, 00:27:50.781 "nvme_io": false, 00:27:50.781 "nvme_io_md": false, 00:27:50.781 "write_zeroes": true, 00:27:50.781 "zcopy": false, 00:27:50.781 "get_zone_info": false, 00:27:50.781 "zone_management": false, 00:27:50.781 "zone_append": false, 00:27:50.781 "compare": false, 00:27:50.781 "compare_and_write": false, 00:27:50.781 "abort": false, 00:27:50.781 "seek_hole": true, 00:27:50.781 "seek_data": true, 00:27:50.781 "copy": false, 00:27:50.781 "nvme_iov_md": false 00:27:50.781 }, 00:27:50.781 "driver_specific": { 00:27:50.781 "lvol": { 00:27:50.781 "lvol_store_uuid": "09c5d382-45b8-447b-8f8d-f1d5c58dd77d", 00:27:50.781 "base_bdev": "nvme0n1", 00:27:50.781 "thin_provision": true, 00:27:50.781 "num_allocated_clusters": 0, 00:27:50.781 "snapshot": false, 00:27:50.781 "clone": false, 00:27:50.781 "esnap_clone": false 00:27:50.781 } 00:27:50.781 } 00:27:50.781 } 00:27:50.781 ]' 00:27:50.781 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
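For reference, the get_bdev_size helper being traced here reads the bdev_get_bdevs JSON and converts block_size x num_blocks into MiB. A minimal sketch of the same computation, assuming bash with jq available; the variable names are illustrative rather than the verbatim autotest_common.sh code:

# Sketch of get_bdev_size's arithmetic (illustrative, not the exact helper).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev=6ebf9e39-bd91-464d-9531-a9a62a4a6fa1          # the lvol queried above
info=$("$rpc" bdev_get_bdevs -b "$bdev")           # JSON array with one bdev object
bs=$(jq '.[] .block_size' <<< "$info")             # -> 4096
nb=$(jq '.[] .num_blocks' <<< "$info")             # -> 26476544
echo $(( bs * nb / 1024 / 1024 ))                  # bytes to MiB: 103424

The same arithmetic gives 5120 MiB for the 1310720-block nvme0n1 queried earlier, which is the bdev_size/base_size value echoed in those traces.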
00:27:50.781 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:50.781 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:51.039 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:51.039 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:51.039 13:22:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:27:51.039 13:22:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:27:51.039 13:22:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:51.297 13:22:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:51.298 13:22:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:51.298 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:51.298 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:51.298 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:51.298 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:51.298 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 00:27:51.556 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:51.556 { 00:27:51.556 "name": "6ebf9e39-bd91-464d-9531-a9a62a4a6fa1", 00:27:51.556 "aliases": [ 00:27:51.556 "lvs/nvme0n1p0" 00:27:51.556 ], 00:27:51.556 "product_name": "Logical Volume", 00:27:51.556 "block_size": 4096, 00:27:51.556 "num_blocks": 26476544, 00:27:51.556 "uuid": "6ebf9e39-bd91-464d-9531-a9a62a4a6fa1", 00:27:51.556 "assigned_rate_limits": { 00:27:51.556 "rw_ios_per_sec": 0, 00:27:51.556 "rw_mbytes_per_sec": 0, 00:27:51.557 "r_mbytes_per_sec": 0, 00:27:51.557 "w_mbytes_per_sec": 0 00:27:51.557 }, 00:27:51.557 "claimed": false, 00:27:51.557 "zoned": false, 00:27:51.557 "supported_io_types": { 00:27:51.557 "read": true, 00:27:51.557 "write": true, 00:27:51.557 "unmap": true, 00:27:51.557 "flush": false, 00:27:51.557 "reset": true, 00:27:51.557 "nvme_admin": false, 00:27:51.557 "nvme_io": false, 00:27:51.557 "nvme_io_md": false, 00:27:51.557 "write_zeroes": true, 00:27:51.557 "zcopy": false, 00:27:51.557 "get_zone_info": false, 00:27:51.557 "zone_management": false, 00:27:51.557 "zone_append": false, 00:27:51.557 "compare": false, 00:27:51.557 "compare_and_write": false, 00:27:51.557 "abort": false, 00:27:51.557 "seek_hole": true, 00:27:51.557 "seek_data": true, 00:27:51.557 "copy": false, 00:27:51.557 "nvme_iov_md": false 00:27:51.557 }, 00:27:51.557 "driver_specific": { 00:27:51.557 "lvol": { 00:27:51.557 "lvol_store_uuid": "09c5d382-45b8-447b-8f8d-f1d5c58dd77d", 00:27:51.557 "base_bdev": "nvme0n1", 00:27:51.557 "thin_provision": true, 00:27:51.557 "num_allocated_clusters": 0, 00:27:51.557 "snapshot": false, 00:27:51.557 "clone": false, 00:27:51.557 "esnap_clone": false 00:27:51.557 } 00:27:51.557 } 00:27:51.557 } 00:27:51.557 ]' 00:27:51.557 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:51.557 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:51.557 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:51.557 13:22:38 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:27:51.557 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:51.557 13:22:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:27:51.557 13:22:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:51.557 13:22:38 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 --l2p_dram_limit 10' 00:27:51.557 13:22:38 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:51.557 13:22:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:51.557 13:22:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:51.557 13:22:38 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:27:51.557 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:27:51.557 13:22:38 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6ebf9e39-bd91-464d-9531-a9a62a4a6fa1 --l2p_dram_limit 10 -c nvc0n1p0 00:27:51.829 [2024-12-06 13:22:38.704667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 13:22:38.704763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:51.829 [2024-12-06 13:22:38.704803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:51.829 [2024-12-06 13:22:38.704828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.704932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 13:22:38.704950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:51.829 [2024-12-06 13:22:38.704964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:51.829 [2024-12-06 13:22:38.704976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.705014] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:51.829 [2024-12-06 13:22:38.706119] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:51.829 [2024-12-06 13:22:38.706207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 13:22:38.706222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.829 [2024-12-06 13:22:38.706239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:27:51.829 [2024-12-06 13:22:38.706279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.706521] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 25e71a3f-895e-4526-801e-79081fb50ab9 00:27:51.829 [2024-12-06 13:22:38.708664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 13:22:38.708721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:51.829 [2024-12-06 13:22:38.708753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:51.829 [2024-12-06 13:22:38.708778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.719222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 
13:22:38.719284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.829 [2024-12-06 13:22:38.719335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.347 ms 00:27:51.829 [2024-12-06 13:22:38.719349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.719469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 13:22:38.719491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:51.829 [2024-12-06 13:22:38.719504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:51.829 [2024-12-06 13:22:38.719522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.719640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 13:22:38.719670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:51.829 [2024-12-06 13:22:38.719687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:51.829 [2024-12-06 13:22:38.719701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.719735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:51.829 [2024-12-06 13:22:38.724865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.829 [2024-12-06 13:22:38.724918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.829 [2024-12-06 13:22:38.724954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:27:51.829 [2024-12-06 13:22:38.724966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.829 [2024-12-06 13:22:38.725013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.830 [2024-12-06 13:22:38.725028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:51.830 [2024-12-06 13:22:38.725043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:51.830 [2024-12-06 13:22:38.725054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.830 [2024-12-06 13:22:38.725101] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:51.830 [2024-12-06 13:22:38.725326] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:51.830 [2024-12-06 13:22:38.725354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:51.830 [2024-12-06 13:22:38.725370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:51.830 [2024-12-06 13:22:38.725387] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:51.830 [2024-12-06 13:22:38.725401] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:51.830 [2024-12-06 13:22:38.725416] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:51.830 [2024-12-06 13:22:38.725429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:51.830 [2024-12-06 13:22:38.725444] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:51.830 [2024-12-06 13:22:38.725455] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:51.830 [2024-12-06 13:22:38.725469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.830 [2024-12-06 13:22:38.725493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:51.830 [2024-12-06 13:22:38.725509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:27:51.830 [2024-12-06 13:22:38.725520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.830 [2024-12-06 13:22:38.725619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.830 [2024-12-06 13:22:38.725634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:51.830 [2024-12-06 13:22:38.725649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:51.830 [2024-12-06 13:22:38.725663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.830 [2024-12-06 13:22:38.725773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:51.830 [2024-12-06 13:22:38.725794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:51.830 [2024-12-06 13:22:38.725810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.830 [2024-12-06 13:22:38.725822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.830 [2024-12-06 13:22:38.725837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:51.830 [2024-12-06 13:22:38.725847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:51.830 [2024-12-06 13:22:38.725860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:51.830 [2024-12-06 13:22:38.725871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:51.830 [2024-12-06 13:22:38.725884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:51.830 [2024-12-06 13:22:38.725894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.830 [2024-12-06 13:22:38.725907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:51.830 [2024-12-06 13:22:38.725917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:51.830 [2024-12-06 13:22:38.725932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.830 [2024-12-06 13:22:38.725942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:51.830 [2024-12-06 13:22:38.725955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:51.830 [2024-12-06 13:22:38.725965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.830 [2024-12-06 13:22:38.725980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:51.830 [2024-12-06 13:22:38.725992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:51.830 [2024-12-06 13:22:38.726005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:51.830 [2024-12-06 13:22:38.726028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.830 [2024-12-06 13:22:38.726052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:51.830 
[2024-12-06 13:22:38.726062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.830 [2024-12-06 13:22:38.726085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:51.830 [2024-12-06 13:22:38.726097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.830 [2024-12-06 13:22:38.726120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:51.830 [2024-12-06 13:22:38.726145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.830 [2024-12-06 13:22:38.726170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:51.830 [2024-12-06 13:22:38.726185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.830 [2024-12-06 13:22:38.726209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:51.830 [2024-12-06 13:22:38.726219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:51.830 [2024-12-06 13:22:38.726232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.830 [2024-12-06 13:22:38.726243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:51.830 [2024-12-06 13:22:38.726287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:51.830 [2024-12-06 13:22:38.726299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:51.830 [2024-12-06 13:22:38.726323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:51.830 [2024-12-06 13:22:38.726337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726347] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:51.830 [2024-12-06 13:22:38.726360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:51.830 [2024-12-06 13:22:38.726379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.830 [2024-12-06 13:22:38.726392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.830 [2024-12-06 13:22:38.726404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:51.830 [2024-12-06 13:22:38.726421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:51.830 [2024-12-06 13:22:38.726433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:51.830 [2024-12-06 13:22:38.726447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:51.830 [2024-12-06 13:22:38.726458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:51.830 [2024-12-06 13:22:38.726471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:51.830 [2024-12-06 13:22:38.726484] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:51.830 [2024-12-06 
13:22:38.726505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.830 [2024-12-06 13:22:38.726518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:51.830 [2024-12-06 13:22:38.726533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:51.830 [2024-12-06 13:22:38.726544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:51.830 [2024-12-06 13:22:38.726558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:51.830 [2024-12-06 13:22:38.726569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:51.830 [2024-12-06 13:22:38.726583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:51.830 [2024-12-06 13:22:38.726595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:51.830 [2024-12-06 13:22:38.726609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:51.830 [2024-12-06 13:22:38.726620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:51.830 [2024-12-06 13:22:38.726639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:51.830 [2024-12-06 13:22:38.726650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:51.830 [2024-12-06 13:22:38.726664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:51.830 [2024-12-06 13:22:38.726676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:51.830 [2024-12-06 13:22:38.726690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:51.830 [2024-12-06 13:22:38.726702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:51.830 [2024-12-06 13:22:38.726717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.830 [2024-12-06 13:22:38.726730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:51.830 [2024-12-06 13:22:38.726744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:51.830 [2024-12-06 13:22:38.726756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:51.830 [2024-12-06 13:22:38.726771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:51.830 [2024-12-06 13:22:38.726783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.831 [2024-12-06 13:22:38.726797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:51.831 [2024-12-06 13:22:38.726810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:27:51.831 [2024-12-06 13:22:38.726824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.831 [2024-12-06 13:22:38.726882] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:51.831 [2024-12-06 13:22:38.726919] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:55.174 [2024-12-06 13:22:41.485330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.485445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:55.174 [2024-12-06 13:22:41.485483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2758.462 ms 00:27:55.174 [2024-12-06 13:22:41.485499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.526958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.527057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.174 [2024-12-06 13:22:41.527077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.088 ms 00:27:55.174 [2024-12-06 13:22:41.527092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.527323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.527349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:55.174 [2024-12-06 13:22:41.527382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:55.174 [2024-12-06 13:22:41.527415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.569237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.569333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.174 [2024-12-06 13:22:41.569352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.739 ms 00:27:55.174 [2024-12-06 13:22:41.569369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.569418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.569437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.174 [2024-12-06 13:22:41.569449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:55.174 [2024-12-06 13:22:41.569475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.570235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.570305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.174 [2024-12-06 13:22:41.570336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:27:55.174 [2024-12-06 13:22:41.570350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 
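Each management step in the FTL trace above is emitted by mngt/ftl_mngt.c as a four-record group: Action (or Rollback), name, duration, status. Because every record lands on its own timestamped console line, per-step timings can be tabulated from a saved copy of this output; a minimal bash sketch, where the file name build.log is an assumption:

    # Pair each trace_step "name:" record with the "duration:" record
    # that follows it in the same four-record group (bash, for <(...)).
    paste <(sed -n 's/.*trace_step.*name: //p' build.log) \
          <(sed -n 's/.*trace_step.*duration: //p' build.log)

For this run such a table would show "Scrub NV cache" dominating FTL startup at 2758.462 ms of the 3213.205 ms total reported below.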
[2024-12-06 13:22:41.570522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.570546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.174 [2024-12-06 13:22:41.570559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:27:55.174 [2024-12-06 13:22:41.570586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.590635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.590699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:55.174 [2024-12-06 13:22:41.590732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.016 ms 00:27:55.174 [2024-12-06 13:22:41.590746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.617415] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:55.174 [2024-12-06 13:22:41.622082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.622177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:55.174 [2024-12-06 13:22:41.622199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.224 ms 00:27:55.174 [2024-12-06 13:22:41.622211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.694057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.694174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:55.174 [2024-12-06 13:22:41.694200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.790 ms 00:27:55.174 [2024-12-06 13:22:41.694212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.694474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.694510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:55.174 [2024-12-06 13:22:41.694547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:27:55.174 [2024-12-06 13:22:41.694559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.721482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.721541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:55.174 [2024-12-06 13:22:41.721578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.847 ms 00:27:55.174 [2024-12-06 13:22:41.721593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.747922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.747996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:55.174 [2024-12-06 13:22:41.748033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.278 ms 00:27:55.174 [2024-12-06 13:22:41.748044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.749023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.749068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:55.174 
[2024-12-06 13:22:41.749101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:27:55.174 [2024-12-06 13:22:41.749112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.828011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.828077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:55.174 [2024-12-06 13:22:41.828117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.789 ms 00:27:55.174 [2024-12-06 13:22:41.828129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.857961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.858019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:55.174 [2024-12-06 13:22:41.858055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.722 ms 00:27:55.174 [2024-12-06 13:22:41.858068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.888103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.888185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:55.174 [2024-12-06 13:22:41.888222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.983 ms 00:27:55.174 [2024-12-06 13:22:41.888233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.916527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.916611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:55.174 [2024-12-06 13:22:41.916657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.243 ms 00:27:55.174 [2024-12-06 13:22:41.916669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.174 [2024-12-06 13:22:41.916738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.174 [2024-12-06 13:22:41.916756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:55.174 [2024-12-06 13:22:41.916785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:55.174 [2024-12-06 13:22:41.916796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.175 [2024-12-06 13:22:41.916950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.175 [2024-12-06 13:22:41.916978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:55.175 [2024-12-06 13:22:41.916995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:55.175 [2024-12-06 13:22:41.917007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.175 [2024-12-06 13:22:41.918500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3213.205 ms, result 0 00:27:55.175 { 00:27:55.175 "name": "ftl0", 00:27:55.175 "uuid": "25e71a3f-895e-4526-801e-79081fb50ab9" 00:27:55.175 } 00:27:55.175 13:22:41 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:55.175 13:22:41 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:55.433 13:22:42 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:27:55.433 13:22:42 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:55.693 [2024-12-06 13:22:42.461697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.461825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:55.693 [2024-12-06 13:22:42.461846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:55.693 [2024-12-06 13:22:42.461862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.461918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:55.693 [2024-12-06 13:22:42.465860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.465926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:55.693 [2024-12-06 13:22:42.465961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.912 ms 00:27:55.693 [2024-12-06 13:22:42.465972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.466385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.466415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:55.693 [2024-12-06 13:22:42.466433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:27:55.693 [2024-12-06 13:22:42.466445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.469398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.469426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:55.693 [2024-12-06 13:22:42.469459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:27:55.693 [2024-12-06 13:22:42.469470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.475634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.475692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:55.693 [2024-12-06 13:22:42.475725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.133 ms 00:27:55.693 [2024-12-06 13:22:42.475743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.507706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.507803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:55.693 [2024-12-06 13:22:42.507827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.857 ms 00:27:55.693 [2024-12-06 13:22:42.507840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.529190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.529288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:55.693 [2024-12-06 13:22:42.529330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.237 ms 00:27:55.693 [2024-12-06 13:22:42.529343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.529607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.529638] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:55.693 [2024-12-06 13:22:42.529657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:27:55.693 [2024-12-06 13:22:42.529674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.559440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.559535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:55.693 [2024-12-06 13:22:42.559574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.728 ms 00:27:55.693 [2024-12-06 13:22:42.559586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.588469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.588560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:55.693 [2024-12-06 13:22:42.588598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.779 ms 00:27:55.693 [2024-12-06 13:22:42.588611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.616824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.616902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:55.693 [2024-12-06 13:22:42.616921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.150 ms 00:27:55.693 [2024-12-06 13:22:42.616932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.645643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.693 [2024-12-06 13:22:42.645707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:55.693 [2024-12-06 13:22:42.645742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.575 ms 00:27:55.693 [2024-12-06 13:22:42.645755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.693 [2024-12-06 13:22:42.645806] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:55.693 [2024-12-06 13:22:42.645833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:55.693 [2024-12-06 13:22:42.645849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:55.693 [2024-12-06 13:22:42.645877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:55.693 [2024-12-06 13:22:42.645892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.645904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.645928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.645941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.645958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.645970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.645984] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.645996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 
[2024-12-06 13:22:42.646366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:27:55.694 [2024-12-06 13:22:42.646756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.646992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:55.694 [2024-12-06 13:22:42.647232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:55.695 [2024-12-06 13:22:42.647381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:55.695 [2024-12-06 13:22:42.647395] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25e71a3f-895e-4526-801e-79081fb50ab9 00:27:55.695 [2024-12-06 13:22:42.647408] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:55.695 [2024-12-06 13:22:42.647428] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:55.695 [2024-12-06 13:22:42.647438] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:55.695 [2024-12-06 13:22:42.647452] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:55.695 [2024-12-06 13:22:42.647463] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:55.695 [2024-12-06 13:22:42.647476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:55.695 [2024-12-06 13:22:42.647487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:55.695 [2024-12-06 13:22:42.647499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:55.695 [2024-12-06 13:22:42.647509] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:27:55.695 [2024-12-06 13:22:42.647523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.695 [2024-12-06 13:22:42.647534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:55.695 [2024-12-06 13:22:42.647551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:27:55.695 [2024-12-06 13:22:42.647566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.695 [2024-12-06 13:22:42.664215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.695 [2024-12-06 13:22:42.664257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:55.695 [2024-12-06 13:22:42.664292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.560 ms 00:27:55.695 [2024-12-06 13:22:42.664303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.695 [2024-12-06 13:22:42.664818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.695 [2024-12-06 13:22:42.664851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:55.695 [2024-12-06 13:22:42.664879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:27:55.695 [2024-12-06 13:22:42.664891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.716905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.716978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:55.953 [2024-12-06 13:22:42.717015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.717026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.717111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.717130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.953 [2024-12-06 13:22:42.717181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.717195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.717331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.717364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.953 [2024-12-06 13:22:42.717382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.717393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.717428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.717447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.953 [2024-12-06 13:22:42.717466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.717477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.812111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.812214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.953 [2024-12-06 13:22:42.812262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:55.953 [2024-12-06 13:22:42.812286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.895481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.895624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.953 [2024-12-06 13:22:42.895663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.895676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.895824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.895844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:55.953 [2024-12-06 13:22:42.895861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.895881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.895970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.895987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:55.953 [2024-12-06 13:22:42.896005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.896020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.896180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.896213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:55.953 [2024-12-06 13:22:42.896229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.896242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.896326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.896345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:55.953 [2024-12-06 13:22:42.896360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.896371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.896428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.896444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:55.953 [2024-12-06 13:22:42.896474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.896486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.896550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.953 [2024-12-06 13:22:42.896567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:55.953 [2024-12-06 13:22:42.896583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.953 [2024-12-06 13:22:42.896594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.953 [2024-12-06 13:22:42.896771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 435.038 ms, result 0 00:27:55.953 true 00:27:55.953 13:22:42 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79155 
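The shutdown just logged is driven by the restore.sh fragment visible above (script lines 61-65): the test brackets the output of rpc.py save_subsystem_config with '{"subsystems": [' and ']}' to produce a standalone JSON config for the spdk_dd runs that follow, then unloads the FTL bdev. A minimal sketch of that pattern, assuming the SPDK repo root as the working directory; the ftl.json output name here is illustrative (the test's actual file appears below as test/ftl/config/ftl.json):

    # Snapshot the bdev subsystem as a self-contained JSON config,
    # then detach ftl0 so it persists its metadata and clean state.
    {
      echo '{"subsystems": ['
      scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > ftl.json
    scripts/rpc.py bdev_ftl_unload -b ftl0

The clean unload is what produces the "Persist ..." steps and "Set FTL clean state" recorded above; the startup after the dd/md5sum/spdk_dd sequence below validates that superblock and restores the L2P, band, and P2L state from it.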
00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79155 ']' 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79155 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79155 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:55.953 killing process with pid 79155 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79155' 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79155 00:27:55.953 13:22:42 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79155 00:28:01.239 13:22:47 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:06.510 262144+0 records in 00:28:06.510 262144+0 records out 00:28:06.510 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.87858 s, 220 MB/s 00:28:06.510 13:22:52 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:07.885 13:22:54 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:07.885 [2024-12-06 13:22:54.806618] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:28:07.885 [2024-12-06 13:22:54.806779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79403 ] 00:28:08.144 [2024-12-06 13:22:54.994465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.144 [2024-12-06 13:22:55.128808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.712 [2024-12-06 13:22:55.499699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:08.712 [2024-12-06 13:22:55.499804] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:08.712 [2024-12-06 13:22:55.669764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.669826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:08.712 [2024-12-06 13:22:55.669879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:08.712 [2024-12-06 13:22:55.669892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.669959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.669980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:08.712 [2024-12-06 13:22:55.669993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:08.712 [2024-12-06 13:22:55.670004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.670035] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:28:08.712 [2024-12-06 13:22:55.670938] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:08.712 [2024-12-06 13:22:55.670985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.671000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:08.712 [2024-12-06 13:22:55.671013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:28:08.712 [2024-12-06 13:22:55.671024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.673044] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:08.712 [2024-12-06 13:22:55.689268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.689326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:08.712 [2024-12-06 13:22:55.689360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.225 ms 00:28:08.712 [2024-12-06 13:22:55.689372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.689446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.689464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:08.712 [2024-12-06 13:22:55.689477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:08.712 [2024-12-06 13:22:55.689487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.698406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.698450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:08.712 [2024-12-06 13:22:55.698483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.812 ms 00:28:08.712 [2024-12-06 13:22:55.698501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.698598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.698617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:08.712 [2024-12-06 13:22:55.698630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:08.712 [2024-12-06 13:22:55.698656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.698731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.698748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:08.712 [2024-12-06 13:22:55.698761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:08.712 [2024-12-06 13:22:55.698772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.698816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:08.712 [2024-12-06 13:22:55.703762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.703814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:08.712 [2024-12-06 13:22:55.703851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.959 ms 00:28:08.712 [2024-12-06 13:22:55.703862] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.703902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.712 [2024-12-06 13:22:55.703917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:08.712 [2024-12-06 13:22:55.703929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:08.712 [2024-12-06 13:22:55.703948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.712 [2024-12-06 13:22:55.704012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:08.712 [2024-12-06 13:22:55.704063] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:08.712 [2024-12-06 13:22:55.704105] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:08.712 [2024-12-06 13:22:55.704129] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:08.712 [2024-12-06 13:22:55.704261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:08.712 [2024-12-06 13:22:55.704280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:08.712 [2024-12-06 13:22:55.704295] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:08.712 [2024-12-06 13:22:55.704310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:08.712 [2024-12-06 13:22:55.704324] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:08.712 [2024-12-06 13:22:55.704336] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:08.713 [2024-12-06 13:22:55.704347] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:08.713 [2024-12-06 13:22:55.704363] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:08.713 [2024-12-06 13:22:55.704374] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:08.713 [2024-12-06 13:22:55.704392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.713 [2024-12-06 13:22:55.704404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:08.713 [2024-12-06 13:22:55.704416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:28:08.713 [2024-12-06 13:22:55.704426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.713 [2024-12-06 13:22:55.704526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.713 [2024-12-06 13:22:55.704542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:08.713 [2024-12-06 13:22:55.704554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:08.713 [2024-12-06 13:22:55.704565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.713 [2024-12-06 13:22:55.704688] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:08.713 [2024-12-06 13:22:55.704719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:08.713 [2024-12-06 13:22:55.704733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:28:08.713 [2024-12-06 13:22:55.704744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:08.713 [2024-12-06 13:22:55.704756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:08.713 [2024-12-06 13:22:55.704766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:08.713 [2024-12-06 13:22:55.704776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:08.713 [2024-12-06 13:22:55.704786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:08.713 [2024-12-06 13:22:55.704798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:08.713 [2024-12-06 13:22:55.704809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:08.713 [2024-12-06 13:22:55.704820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:08.713 [2024-12-06 13:22:55.704830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:08.713 [2024-12-06 13:22:55.704840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:08.713 [2024-12-06 13:22:55.704863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:08.713 [2024-12-06 13:22:55.704874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:08.713 [2024-12-06 13:22:55.704887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:08.713 [2024-12-06 13:22:55.704898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:08.713 [2024-12-06 13:22:55.704909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:08.713 [2024-12-06 13:22:55.704919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:08.713 [2024-12-06 13:22:55.704930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:08.713 [2024-12-06 13:22:55.704941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:08.713 [2024-12-06 13:22:55.704951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:08.713 [2024-12-06 13:22:55.704962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:08.713 [2024-12-06 13:22:55.704972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:08.713 [2024-12-06 13:22:55.704983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:08.713 [2024-12-06 13:22:55.704994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:08.713 [2024-12-06 13:22:55.705004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:08.713 [2024-12-06 13:22:55.705014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:08.713 [2024-12-06 13:22:55.705024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:08.713 [2024-12-06 13:22:55.705035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:08.713 [2024-12-06 13:22:55.705046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:08.713 [2024-12-06 13:22:55.705056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:08.713 [2024-12-06 13:22:55.705067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:08.713 [2024-12-06 13:22:55.705077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:08.713 [2024-12-06 13:22:55.705095] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:28:08.713 [2024-12-06 13:22:55.705106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:08.713 [2024-12-06 13:22:55.705122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:08.713 [2024-12-06 13:22:55.705155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:08.713 [2024-12-06 13:22:55.705169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:08.713 [2024-12-06 13:22:55.705180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:08.713 [2024-12-06 13:22:55.705190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:08.713 [2024-12-06 13:22:55.705201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:08.713 [2024-12-06 13:22:55.705212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:08.713 [2024-12-06 13:22:55.705222] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:08.713 [2024-12-06 13:22:55.705234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:08.713 [2024-12-06 13:22:55.705245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:08.713 [2024-12-06 13:22:55.705256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:08.713 [2024-12-06 13:22:55.705269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:08.713 [2024-12-06 13:22:55.705281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:08.713 [2024-12-06 13:22:55.705292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:08.713 [2024-12-06 13:22:55.705303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:08.713 [2024-12-06 13:22:55.705313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:08.713 [2024-12-06 13:22:55.705324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:08.713 [2024-12-06 13:22:55.705336] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:08.713 [2024-12-06 13:22:55.705350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:08.713 [2024-12-06 13:22:55.705368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:08.713 [2024-12-06 13:22:55.705381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:08.713 [2024-12-06 13:22:55.705392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:08.713 [2024-12-06 13:22:55.705403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:08.713 [2024-12-06 13:22:55.705414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:08.713 [2024-12-06 13:22:55.705425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:08.713 [2024-12-06 13:22:55.705437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:08.713 [2024-12-06 13:22:55.705448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:08.713 [2024-12-06 13:22:55.705460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:08.713 [2024-12-06 13:22:55.705471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:08.713 [2024-12-06 13:22:55.705482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:08.713 [2024-12-06 13:22:55.705494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:08.713 [2024-12-06 13:22:55.705505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:08.713 [2024-12-06 13:22:55.705517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:08.713 [2024-12-06 13:22:55.705528] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:08.713 [2024-12-06 13:22:55.705541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:08.713 [2024-12-06 13:22:55.705553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:08.713 [2024-12-06 13:22:55.705565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:08.713 [2024-12-06 13:22:55.705581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:08.713 [2024-12-06 13:22:55.705593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:08.713 [2024-12-06 13:22:55.705605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.713 [2024-12-06 13:22:55.705616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:08.713 [2024-12-06 13:22:55.705629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:28:08.713 [2024-12-06 13:22:55.705641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.972 [2024-12-06 13:22:55.748261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.972 [2024-12-06 13:22:55.748326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:08.972 [2024-12-06 13:22:55.748350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.552 ms 00:28:08.972 [2024-12-06 13:22:55.748368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.972 [2024-12-06 13:22:55.748494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.972 [2024-12-06 13:22:55.748509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:08.972 [2024-12-06 13:22:55.748523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.077 ms 00:28:08.972 [2024-12-06 13:22:55.748534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.972 [2024-12-06 13:22:55.806422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.972 [2024-12-06 13:22:55.806493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:08.972 [2024-12-06 13:22:55.806514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.782 ms 00:28:08.972 [2024-12-06 13:22:55.806526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.972 [2024-12-06 13:22:55.806611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.972 [2024-12-06 13:22:55.806629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:08.972 [2024-12-06 13:22:55.806655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:08.972 [2024-12-06 13:22:55.806666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.972 [2024-12-06 13:22:55.807357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.972 [2024-12-06 13:22:55.807385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:08.972 [2024-12-06 13:22:55.807400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:28:08.972 [2024-12-06 13:22:55.807412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.972 [2024-12-06 13:22:55.807601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.972 [2024-12-06 13:22:55.807621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:08.972 [2024-12-06 13:22:55.807646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:28:08.972 [2024-12-06 13:22:55.807657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.972 [2024-12-06 13:22:55.828556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.973 [2024-12-06 13:22:55.828618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:08.973 [2024-12-06 13:22:55.828636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.869 ms 00:28:08.973 [2024-12-06 13:22:55.828649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.973 [2024-12-06 13:22:55.846011] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:08.973 [2024-12-06 13:22:55.846077] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:08.973 [2024-12-06 13:22:55.846113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.973 [2024-12-06 13:22:55.846133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:08.973 [2024-12-06 13:22:55.846157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.305 ms 00:28:08.973 [2024-12-06 13:22:55.846172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.973 [2024-12-06 13:22:55.875417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.973 [2024-12-06 13:22:55.875502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:08.973 [2024-12-06 13:22:55.875537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.187 ms 00:28:08.973 [2024-12-06 13:22:55.875552] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.973 [2024-12-06 13:22:55.890634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.973 [2024-12-06 13:22:55.890686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:08.973 [2024-12-06 13:22:55.890702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.013 ms 00:28:08.973 [2024-12-06 13:22:55.890714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.973 [2024-12-06 13:22:55.905695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.973 [2024-12-06 13:22:55.905753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:08.973 [2024-12-06 13:22:55.905770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.937 ms 00:28:08.973 [2024-12-06 13:22:55.905782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.973 [2024-12-06 13:22:55.906734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.973 [2024-12-06 13:22:55.906787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:08.973 [2024-12-06 13:22:55.906807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:28:08.973 [2024-12-06 13:22:55.906829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.973 [2024-12-06 13:22:55.983777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.973 [2024-12-06 13:22:55.983892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:08.973 [2024-12-06 13:22:55.983931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.921 ms 00:28:08.973 [2024-12-06 13:22:55.983959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.231 [2024-12-06 13:22:55.996145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:09.231 [2024-12-06 13:22:55.999156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.231 [2024-12-06 13:22:55.999213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:09.231 [2024-12-06 13:22:55.999247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.129 ms 00:28:09.231 [2024-12-06 13:22:55.999259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.231 [2024-12-06 13:22:55.999372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.232 [2024-12-06 13:22:55.999393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:09.232 [2024-12-06 13:22:55.999406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:09.232 [2024-12-06 13:22:55.999417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.232 [2024-12-06 13:22:55.999523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.232 [2024-12-06 13:22:55.999548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:09.232 [2024-12-06 13:22:55.999561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:09.232 [2024-12-06 13:22:55.999572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.232 [2024-12-06 13:22:55.999604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.232 [2024-12-06 13:22:55.999620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:28:09.232 [2024-12-06 13:22:55.999633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:09.232 [2024-12-06 13:22:55.999643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.232 [2024-12-06 13:22:55.999694] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:09.232 [2024-12-06 13:22:55.999720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.232 [2024-12-06 13:22:55.999732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:09.232 [2024-12-06 13:22:55.999744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:09.232 [2024-12-06 13:22:55.999755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.232 [2024-12-06 13:22:56.030544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.232 [2024-12-06 13:22:56.030642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:09.232 [2024-12-06 13:22:56.030676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.763 ms 00:28:09.232 [2024-12-06 13:22:56.030704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.232 [2024-12-06 13:22:56.030789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.232 [2024-12-06 13:22:56.030808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:09.232 [2024-12-06 13:22:56.030822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:09.232 [2024-12-06 13:22:56.030833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.232 [2024-12-06 13:22:56.032419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.006 ms, result 0 00:28:10.169  [2024-12-06T13:22:58.119Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-06T13:22:59.053Z] Copying: 51/1024 [MB] (25 MBps) [2024-12-06T13:23:00.049Z] Copying: 77/1024 [MB] (25 MBps) [2024-12-06T13:23:01.423Z] Copying: 103/1024 [MB] (26 MBps) [2024-12-06T13:23:02.358Z] Copying: 129/1024 [MB] (26 MBps) [2024-12-06T13:23:03.293Z] Copying: 154/1024 [MB] (25 MBps) [2024-12-06T13:23:04.229Z] Copying: 180/1024 [MB] (25 MBps) [2024-12-06T13:23:05.165Z] Copying: 205/1024 [MB] (25 MBps) [2024-12-06T13:23:06.099Z] Copying: 230/1024 [MB] (25 MBps) [2024-12-06T13:23:07.076Z] Copying: 256/1024 [MB] (25 MBps) [2024-12-06T13:23:08.449Z] Copying: 281/1024 [MB] (25 MBps) [2024-12-06T13:23:09.384Z] Copying: 307/1024 [MB] (25 MBps) [2024-12-06T13:23:10.320Z] Copying: 332/1024 [MB] (25 MBps) [2024-12-06T13:23:11.256Z] Copying: 358/1024 [MB] (26 MBps) [2024-12-06T13:23:12.189Z] Copying: 384/1024 [MB] (25 MBps) [2024-12-06T13:23:13.126Z] Copying: 410/1024 [MB] (25 MBps) [2024-12-06T13:23:14.062Z] Copying: 434/1024 [MB] (24 MBps) [2024-12-06T13:23:15.436Z] Copying: 460/1024 [MB] (25 MBps) [2024-12-06T13:23:16.369Z] Copying: 487/1024 [MB] (26 MBps) [2024-12-06T13:23:17.300Z] Copying: 513/1024 [MB] (26 MBps) [2024-12-06T13:23:18.232Z] Copying: 539/1024 [MB] (26 MBps) [2024-12-06T13:23:19.163Z] Copying: 565/1024 [MB] (25 MBps) [2024-12-06T13:23:20.098Z] Copying: 591/1024 [MB] (26 MBps) [2024-12-06T13:23:21.474Z] Copying: 617/1024 [MB] (26 MBps) [2024-12-06T13:23:22.407Z] Copying: 643/1024 [MB] (25 MBps) [2024-12-06T13:23:23.342Z] Copying: 668/1024 [MB] (25 MBps) [2024-12-06T13:23:24.278Z] Copying: 694/1024 [MB] (25 
MBps) [2024-12-06T13:23:25.212Z] Copying: 719/1024 [MB] (24 MBps) [2024-12-06T13:23:26.192Z] Copying: 743/1024 [MB] (24 MBps) [2024-12-06T13:23:27.127Z] Copying: 767/1024 [MB] (24 MBps) [2024-12-06T13:23:28.062Z] Copying: 793/1024 [MB] (25 MBps) [2024-12-06T13:23:29.438Z] Copying: 820/1024 [MB] (26 MBps) [2024-12-06T13:23:30.372Z] Copying: 847/1024 [MB] (27 MBps) [2024-12-06T13:23:31.309Z] Copying: 873/1024 [MB] (26 MBps) [2024-12-06T13:23:32.245Z] Copying: 900/1024 [MB] (26 MBps) [2024-12-06T13:23:33.181Z] Copying: 926/1024 [MB] (25 MBps) [2024-12-06T13:23:34.112Z] Copying: 950/1024 [MB] (24 MBps) [2024-12-06T13:23:35.089Z] Copying: 975/1024 [MB] (25 MBps) [2024-12-06T13:23:36.022Z] Copying: 999/1024 [MB] (24 MBps) [2024-12-06T13:23:36.022Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-06 13:23:35.996224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.006 [2024-12-06 13:23:35.996285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:49.006 [2024-12-06 13:23:35.996306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:49.006 [2024-12-06 13:23:35.996319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.006 [2024-12-06 13:23:35.996348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:49.006 [2024-12-06 13:23:35.999944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.006 [2024-12-06 13:23:35.999979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:49.006 [2024-12-06 13:23:36.000018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:28:49.006 [2024-12-06 13:23:36.000029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.006 [2024-12-06 13:23:36.001547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.006 [2024-12-06 13:23:36.001587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:49.006 [2024-12-06 13:23:36.001603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:28:49.006 [2024-12-06 13:23:36.001615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.006 [2024-12-06 13:23:36.017723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.006 [2024-12-06 13:23:36.017766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:49.006 [2024-12-06 13:23:36.017799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.086 ms 00:28:49.006 [2024-12-06 13:23:36.017810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.264 [2024-12-06 13:23:36.024375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.264 [2024-12-06 13:23:36.024409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:49.264 [2024-12-06 13:23:36.024439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.448 ms 00:28:49.264 [2024-12-06 13:23:36.024450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.264 [2024-12-06 13:23:36.053716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.264 [2024-12-06 13:23:36.053756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:49.264 [2024-12-06 13:23:36.053788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.172 ms 00:28:49.264 [2024-12-06 
13:23:36.053798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.264 [2024-12-06 13:23:36.070212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.265 [2024-12-06 13:23:36.070252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:49.265 [2024-12-06 13:23:36.070308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.285 ms 00:28:49.265 [2024-12-06 13:23:36.070319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.265 [2024-12-06 13:23:36.070542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.265 [2024-12-06 13:23:36.070577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:49.265 [2024-12-06 13:23:36.070592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:28:49.265 [2024-12-06 13:23:36.070603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.265 [2024-12-06 13:23:36.099065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.265 [2024-12-06 13:23:36.099105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:49.265 [2024-12-06 13:23:36.099138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.442 ms 00:28:49.265 [2024-12-06 13:23:36.099165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.265 [2024-12-06 13:23:36.126952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.265 [2024-12-06 13:23:36.126992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:49.265 [2024-12-06 13:23:36.127024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.671 ms 00:28:49.265 [2024-12-06 13:23:36.127034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.265 [2024-12-06 13:23:36.154327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.265 [2024-12-06 13:23:36.154369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:49.265 [2024-12-06 13:23:36.154384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.160 ms 00:28:49.265 [2024-12-06 13:23:36.154395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.265 [2024-12-06 13:23:36.181591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.265 [2024-12-06 13:23:36.181630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:49.265 [2024-12-06 13:23:36.181661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.025 ms 00:28:49.265 [2024-12-06 13:23:36.181671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.265 [2024-12-06 13:23:36.181793] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:49.265 [2024-12-06 13:23:36.181818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:49.265 [2024-12-06 13:23:36.181841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:49.265 [2024-12-06 13:23:36.181853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:49.265 [2024-12-06 13:23:36.181864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:49.265 [2024-12-06 13:23:36.181877] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 5-100: 0 / 261120 wr_cnt: 0 state: free [96 identical per-band records condensed] 00:28:49.266 [2024-12-06 13:23:36.183024] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:49.266 [2024-12-06 13:23:36.183041] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25e71a3f-895e-4526-801e-79081fb50ab9 00:28:49.266 [2024-12-06 13:23:36.183053] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:49.266 [2024-12-06 13:23:36.183063] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:49.266 [2024-12-06
13:23:36.183074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:49.266 [2024-12-06 13:23:36.183085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:49.266 [2024-12-06 13:23:36.183095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:49.266 [2024-12-06 13:23:36.183118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:49.266 [2024-12-06 13:23:36.183141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:49.266 [2024-12-06 13:23:36.183154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:49.266 [2024-12-06 13:23:36.183164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:49.266 [2024-12-06 13:23:36.183174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.266 [2024-12-06 13:23:36.183189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:49.266 [2024-12-06 13:23:36.183201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:28:49.266 [2024-12-06 13:23:36.183213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.266 [2024-12-06 13:23:36.199054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.266 [2024-12-06 13:23:36.199091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:49.266 [2024-12-06 13:23:36.199122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.792 ms 00:28:49.266 [2024-12-06 13:23:36.199133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.266 [2024-12-06 13:23:36.199649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.266 [2024-12-06 13:23:36.199679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:49.266 [2024-12-06 13:23:36.199708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:28:49.266 [2024-12-06 13:23:36.199726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.266 [2024-12-06 13:23:36.240908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.266 [2024-12-06 13:23:36.240952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.266 [2024-12-06 13:23:36.240984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.266 [2024-12-06 13:23:36.240994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.266 [2024-12-06 13:23:36.241067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.266 [2024-12-06 13:23:36.241114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.266 [2024-12-06 13:23:36.241126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.266 [2024-12-06 13:23:36.241148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.266 [2024-12-06 13:23:36.241261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.267 [2024-12-06 13:23:36.241290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.267 [2024-12-06 13:23:36.241304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.267 [2024-12-06 13:23:36.241314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.267 [2024-12-06 13:23:36.241338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:49.267 [2024-12-06 13:23:36.241356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.267 [2024-12-06 13:23:36.241369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.267 [2024-12-06 13:23:36.241380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.335332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.335391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.525 [2024-12-06 13:23:36.335426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.335438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.412477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.412532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:49.525 [2024-12-06 13:23:36.412566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.412584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.412687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.412720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:49.525 [2024-12-06 13:23:36.412732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.412746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.412816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.412832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:49.525 [2024-12-06 13:23:36.412845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.412856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.413050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.413081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:49.525 [2024-12-06 13:23:36.413096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.413107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.413176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.413195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:49.525 [2024-12-06 13:23:36.413208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.413219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.413268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.413300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:49.525 [2024-12-06 13:23:36.413314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.413325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 
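Each management step in this log is emitted by mngt/ftl_mngt.c as the same three trace_step records: 428 carries the step name, 430 the duration, 431 the status. That makes per-step timings (for instance Persist NV cache metadata at 29.172 ms, or the 0.000 ms rollback entries) easy to pull out of a capture like this one. A minimal sketch in Python; the console.log path is hypothetical, and the flattened one-record-per-console-write layout shown above is assumed:

import re

# Read the captured console text (hypothetical file name).
log = open("console.log").read()

# 428:trace_step carries "name: <step>", terminated here by the next
# HH:MM:SS.mmm console timestamp; 430:trace_step carries "duration: <ms> ms".
names = re.findall(
    r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.*?) \d{2}:\d{2}:\d{2}\.\d{3}", log)
durations = re.findall(
    r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms", log)

# Each step logs name and duration in lockstep, so pairing by position suffices.
for step, ms in sorted(zip(names, map(float, durations)), key=lambda p: -p[1]):
    print(f"{ms:10.3f} ms  {step}")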
[2024-12-06 13:23:36.413379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.525 [2024-12-06 13:23:36.413405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:49.525 [2024-12-06 13:23:36.413418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.525 [2024-12-06 13:23:36.413430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.525 [2024-12-06 13:23:36.413580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 417.315 ms, result 0 00:28:50.460 00:28:50.460 00:28:50.460 13:23:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:50.460 [2024-12-06 13:23:37.466456] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:28:50.460 [2024-12-06 13:23:37.466937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79827 ] 00:28:50.718 [2024-12-06 13:23:37.649353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.976 [2024-12-06 13:23:37.769312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.234 [2024-12-06 13:23:38.088272] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:51.234 [2024-12-06 13:23:38.088360] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:51.493 [2024-12-06 13:23:38.250705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.250786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:51.494 [2024-12-06 13:23:38.250806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:51.494 [2024-12-06 13:23:38.250819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.250911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.250941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:51.494 [2024-12-06 13:23:38.250954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:51.494 [2024-12-06 13:23:38.250965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.251003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:51.494 [2024-12-06 13:23:38.251926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:51.494 [2024-12-06 13:23:38.251957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.251970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:51.494 [2024-12-06 13:23:38.251982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:28:51.494 [2024-12-06 13:23:38.251992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.254089] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 
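The spdk_dd read above passes --count=262144; with a 4 KiB FTL logical block size (an assumption, though consistent with the capacities dumped in this log) that is exactly the 1024 MiB moved in the copy phase earlier, and the roughly 40 s between that phase's first and last progress stamps (13:22:56 to 13:23:36) squares with the reported average of 25 MBps. The arithmetic, as a sketch:

count = 262144                  # --count from the spdk_dd line above
block = 4096                    # bytes per block; assumed FTL block size
mib = count * block / 2**20
print(mib)                      # 1024.0 -> "Copying: 1024/1024 [MB]"

elapsed = 40                    # ~13:22:56 to 13:23:36 in the copy phase above
print(mib / elapsed)            # ~25.6 -> "(average 25 MBps)"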
00:28:51.494 [2024-12-06 13:23:38.271143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.271219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:51.494 [2024-12-06 13:23:38.271270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.056 ms 00:28:51.494 [2024-12-06 13:23:38.271292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.271387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.271407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:51.494 [2024-12-06 13:23:38.271420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:51.494 [2024-12-06 13:23:38.271433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.280695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.280762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:51.494 [2024-12-06 13:23:38.280795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.163 ms 00:28:51.494 [2024-12-06 13:23:38.280813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.280931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.280950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:51.494 [2024-12-06 13:23:38.280963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:51.494 [2024-12-06 13:23:38.280974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.281064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.281082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:51.494 [2024-12-06 13:23:38.281094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:51.494 [2024-12-06 13:23:38.281106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.281150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:51.494 [2024-12-06 13:23:38.286306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.286348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:51.494 [2024-12-06 13:23:38.286369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.166 ms 00:28:51.494 [2024-12-06 13:23:38.286381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.286424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.286440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:51.494 [2024-12-06 13:23:38.286452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:51.494 [2024-12-06 13:23:38.286464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.286534] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:51.494 [2024-12-06 13:23:38.286568] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x150 bytes 00:28:51.494 [2024-12-06 13:23:38.286611] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:51.494 [2024-12-06 13:23:38.286636] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:51.494 [2024-12-06 13:23:38.286759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:51.494 [2024-12-06 13:23:38.286775] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:51.494 [2024-12-06 13:23:38.286790] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:51.494 [2024-12-06 13:23:38.286804] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:51.494 [2024-12-06 13:23:38.286817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:51.494 [2024-12-06 13:23:38.286829] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:51.494 [2024-12-06 13:23:38.286841] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:51.494 [2024-12-06 13:23:38.286856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:51.494 [2024-12-06 13:23:38.286867] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:51.494 [2024-12-06 13:23:38.286879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.286891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:51.494 [2024-12-06 13:23:38.286903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:28:51.494 [2024-12-06 13:23:38.286914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.287009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.494 [2024-12-06 13:23:38.287024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:51.494 [2024-12-06 13:23:38.287035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:51.494 [2024-12-06 13:23:38.287046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.494 [2024-12-06 13:23:38.287200] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:51.494 [2024-12-06 13:23:38.287223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:51.494 [2024-12-06 13:23:38.287236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:51.494 [2024-12-06 13:23:38.287248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:51.494 [2024-12-06 13:23:38.287276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:51.494 [2024-12-06 13:23:38.287298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:51.494 [2024-12-06 13:23:38.287309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287320] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:28:51.494 [2024-12-06 13:23:38.287331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:51.494 [2024-12-06 13:23:38.287341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:51.494 [2024-12-06 13:23:38.287350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:51.494 [2024-12-06 13:23:38.287374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:51.494 [2024-12-06 13:23:38.287388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:51.494 [2024-12-06 13:23:38.287399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:51.494 [2024-12-06 13:23:38.287421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:51.494 [2024-12-06 13:23:38.287431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:51.494 [2024-12-06 13:23:38.287452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.494 [2024-12-06 13:23:38.287475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:51.494 [2024-12-06 13:23:38.287485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.494 [2024-12-06 13:23:38.287505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:51.494 [2024-12-06 13:23:38.287516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.494 [2024-12-06 13:23:38.287537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:51.494 [2024-12-06 13:23:38.287548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.494 [2024-12-06 13:23:38.287569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:51.494 [2024-12-06 13:23:38.287580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:51.494 [2024-12-06 13:23:38.287600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:51.494 [2024-12-06 13:23:38.287611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:51.494 [2024-12-06 13:23:38.287622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:51.494 [2024-12-06 13:23:38.287633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:51.494 [2024-12-06 13:23:38.287644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:51.494 [2024-12-06 13:23:38.287654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.494 [2024-12-06 13:23:38.287665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:51.495 [2024-12-06 13:23:38.287676] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:51.495 [2024-12-06 13:23:38.287686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.495 [2024-12-06 13:23:38.287697] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:51.495 [2024-12-06 13:23:38.287709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:51.495 [2024-12-06 13:23:38.287721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:51.495 [2024-12-06 13:23:38.287733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.495 [2024-12-06 13:23:38.287746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:51.495 [2024-12-06 13:23:38.287757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:51.495 [2024-12-06 13:23:38.287768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:51.495 [2024-12-06 13:23:38.287779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:51.495 [2024-12-06 13:23:38.287789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:51.495 [2024-12-06 13:23:38.287800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:51.495 [2024-12-06 13:23:38.287813] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:51.495 [2024-12-06 13:23:38.287827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:51.495 [2024-12-06 13:23:38.287846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:51.495 [2024-12-06 13:23:38.287858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:51.495 [2024-12-06 13:23:38.287870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:51.495 [2024-12-06 13:23:38.287881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:51.495 [2024-12-06 13:23:38.287892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:51.495 [2024-12-06 13:23:38.287903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:51.495 [2024-12-06 13:23:38.287929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:51.495 [2024-12-06 13:23:38.287940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:51.495 [2024-12-06 13:23:38.287951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:51.495 [2024-12-06 13:23:38.287962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:51.495 [2024-12-06 13:23:38.287974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 
00:28:51.495 [2024-12-06 13:23:38.287984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:51.495 [2024-12-06 13:23:38.287995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:51.495 [2024-12-06 13:23:38.288006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:51.495 [2024-12-06 13:23:38.288017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:51.495 [2024-12-06 13:23:38.288030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:51.495 [2024-12-06 13:23:38.288042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:51.495 [2024-12-06 13:23:38.288053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:51.495 [2024-12-06 13:23:38.288064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:51.495 [2024-12-06 13:23:38.288076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:51.495 [2024-12-06 13:23:38.288088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.288099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:51.495 [2024-12-06 13:23:38.288111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:28:51.495 [2024-12-06 13:23:38.288123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.327177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.327305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:51.495 [2024-12-06 13:23:38.327326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.950 ms 00:28:51.495 [2024-12-06 13:23:38.327344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.327464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.327480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:51.495 [2024-12-06 13:23:38.327493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:51.495 [2024-12-06 13:23:38.327504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.384442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.384539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:51.495 [2024-12-06 13:23:38.384564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.826 ms 00:28:51.495 [2024-12-06 13:23:38.384577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.384650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.384667] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:51.495 [2024-12-06 13:23:38.384687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:51.495 [2024-12-06 13:23:38.384699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.385369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.385389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:51.495 [2024-12-06 13:23:38.385404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:28:51.495 [2024-12-06 13:23:38.385416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.385596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.385616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:51.495 [2024-12-06 13:23:38.385634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:28:51.495 [2024-12-06 13:23:38.385645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.405372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.405451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:51.495 [2024-12-06 13:23:38.405470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.698 ms 00:28:51.495 [2024-12-06 13:23:38.405483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.421979] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:51.495 [2024-12-06 13:23:38.422046] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:51.495 [2024-12-06 13:23:38.422066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.422079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:51.495 [2024-12-06 13:23:38.422093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.429 ms 00:28:51.495 [2024-12-06 13:23:38.422104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.451252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.451314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:51.495 [2024-12-06 13:23:38.451332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.090 ms 00:28:51.495 [2024-12-06 13:23:38.451344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.466578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.466632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:51.495 [2024-12-06 13:23:38.466649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.170 ms 00:28:51.495 [2024-12-06 13:23:38.466661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.481700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.481742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:51.495 
[2024-12-06 13:23:38.481758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.994 ms 00:28:51.495 [2024-12-06 13:23:38.481769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.495 [2024-12-06 13:23:38.482676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.495 [2024-12-06 13:23:38.482715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:51.495 [2024-12-06 13:23:38.482736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:28:51.495 [2024-12-06 13:23:38.482755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.557895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.557975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:51.815 [2024-12-06 13:23:38.558019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.114 ms 00:28:51.815 [2024-12-06 13:23:38.558032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.569595] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:51.815 [2024-12-06 13:23:38.572439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.572491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:51.815 [2024-12-06 13:23:38.572524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.341 ms 00:28:51.815 [2024-12-06 13:23:38.572537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.572644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.572664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:51.815 [2024-12-06 13:23:38.572682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:51.815 [2024-12-06 13:23:38.572694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.572808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.572826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:51.815 [2024-12-06 13:23:38.572840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:51.815 [2024-12-06 13:23:38.572852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.572884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.572898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:51.815 [2024-12-06 13:23:38.572910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:51.815 [2024-12-06 13:23:38.572921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.572970] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:51.815 [2024-12-06 13:23:38.572988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.572999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:51.815 [2024-12-06 13:23:38.573012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:51.815 
[2024-12-06 13:23:38.573023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.602239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.602318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:51.815 [2024-12-06 13:23:38.602358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.189 ms 00:28:51.815 [2024-12-06 13:23:38.602370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.602452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.815 [2024-12-06 13:23:38.602470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:51.815 [2024-12-06 13:23:38.602483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:51.815 [2024-12-06 13:23:38.602494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.815 [2024-12-06 13:23:38.603895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.645 ms, result 0 00:28:53.191
Copying: 1013/1024 [MB] (25 MBps) [2024-12-06T13:24:19.493Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-06 13:24:19.367819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.367924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:32.477 [2024-12-06 13:24:19.367952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:32.477 [2024-12-06 13:24:19.367969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.368016] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:32.477 [2024-12-06 13:24:19.373788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.373851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:32.477 [2024-12-06 13:24:19.373873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.726 ms 00:29:32.477 [2024-12-06 13:24:19.373900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.374289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.374331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:32.477 [2024-12-06 13:24:19.374352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:29:32.477 [2024-12-06 13:24:19.374367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.379367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.379401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:32.477 [2024-12-06 13:24:19.379432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.973 ms 00:29:32.477 [2024-12-06 13:24:19.379451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.386683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.386733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:32.477 [2024-12-06 13:24:19.386778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.207 ms 00:29:32.477 [2024-12-06 13:24:19.386789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.421615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.421680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:32.477 [2024-12-06 13:24:19.421714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.753 ms 00:29:32.477 [2024-12-06 13:24:19.421727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.440684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.440764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:32.477 [2024-12-06 13:24:19.440798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.911 ms 00:29:32.477 [2024-12-06 13:24:19.440815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.440977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.441014] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:32.477 [2024-12-06 13:24:19.441028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:32.477 [2024-12-06 13:24:19.441040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.477 [2024-12-06 13:24:19.474981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.477 [2024-12-06 13:24:19.475042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:32.477 [2024-12-06 13:24:19.475075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.920 ms 00:29:32.477 [2024-12-06 13:24:19.475087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.737 [2024-12-06 13:24:19.507045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.737 [2024-12-06 13:24:19.507116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:32.737 [2024-12-06 13:24:19.507171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.886 ms 00:29:32.737 [2024-12-06 13:24:19.507201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.737 [2024-12-06 13:24:19.540516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.737 [2024-12-06 13:24:19.540579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:32.737 [2024-12-06 13:24:19.540596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.253 ms 00:29:32.737 [2024-12-06 13:24:19.540608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.737 [2024-12-06 13:24:19.573405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.737 [2024-12-06 13:24:19.573453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:32.737 [2024-12-06 13:24:19.573471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.703 ms 00:29:32.737 [2024-12-06 13:24:19.573483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.737 [2024-12-06 13:24:19.573529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:32.737 [2024-12-06 13:24:19.573562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573679] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 
[2024-12-06 13:24:19.573984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.573997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:32.737 [2024-12-06 13:24:19.574303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:29:32.738 [2024-12-06 13:24:19.574315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:32.738 [2024-12-06 13:24:19.574833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:32.738 [2024-12-06 13:24:19.574845] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25e71a3f-895e-4526-801e-79081fb50ab9 00:29:32.738 [2024-12-06 13:24:19.574858] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:32.738 [2024-12-06 13:24:19.574869] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:32.738 [2024-12-06 13:24:19.574880] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:32.738 [2024-12-06 13:24:19.574892] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:32.738 [2024-12-06 13:24:19.574917] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:32.738 [2024-12-06 13:24:19.574930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:32.738 [2024-12-06 13:24:19.574942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:32.738 [2024-12-06 13:24:19.574952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:32.738 [2024-12-06 13:24:19.574962] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:29:32.738 [2024-12-06 13:24:19.574974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.738 [2024-12-06 13:24:19.574986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:32.738 [2024-12-06 13:24:19.574999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:29:32.738 [2024-12-06 13:24:19.575016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.738 [2024-12-06 13:24:19.592403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.738 [2024-12-06 13:24:19.592451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:32.738 [2024-12-06 13:24:19.592470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.340 ms 00:29:32.738 [2024-12-06 13:24:19.592482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.738 [2024-12-06 13:24:19.592953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.738 [2024-12-06 13:24:19.592983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:32.738 [2024-12-06 13:24:19.593007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:29:32.738 [2024-12-06 13:24:19.593020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.738 [2024-12-06 13:24:19.638221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.738 [2024-12-06 13:24:19.638284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:32.738 [2024-12-06 13:24:19.638316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.738 [2024-12-06 13:24:19.638329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.738 [2024-12-06 13:24:19.638415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.738 [2024-12-06 13:24:19.638432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:32.738 [2024-12-06 13:24:19.638452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.738 [2024-12-06 13:24:19.638463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.738 [2024-12-06 13:24:19.638559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.738 [2024-12-06 13:24:19.638581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:32.738 [2024-12-06 13:24:19.638594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.738 [2024-12-06 13:24:19.638606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.738 [2024-12-06 13:24:19.638630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.738 [2024-12-06 13:24:19.638644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:32.738 [2024-12-06 13:24:19.638657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.738 [2024-12-06 13:24:19.638676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.753803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.997 [2024-12-06 13:24:19.753893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:32.997 [2024-12-06 13:24:19.753914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:29:32.997 [2024-12-06 13:24:19.753927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.848496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.997 [2024-12-06 13:24:19.848619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:32.997 [2024-12-06 13:24:19.848663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.997 [2024-12-06 13:24:19.848675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.848787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.997 [2024-12-06 13:24:19.848805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:32.997 [2024-12-06 13:24:19.848818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.997 [2024-12-06 13:24:19.848830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.848882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.997 [2024-12-06 13:24:19.848898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:32.997 [2024-12-06 13:24:19.848911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.997 [2024-12-06 13:24:19.848923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.849058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.997 [2024-12-06 13:24:19.849079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:32.997 [2024-12-06 13:24:19.849092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.997 [2024-12-06 13:24:19.849104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.849194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.997 [2024-12-06 13:24:19.849227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:32.997 [2024-12-06 13:24:19.849242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.997 [2024-12-06 13:24:19.849254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.849309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.997 [2024-12-06 13:24:19.849326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:32.997 [2024-12-06 13:24:19.849339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.997 [2024-12-06 13:24:19.849350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.997 [2024-12-06 13:24:19.849405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.998 [2024-12-06 13:24:19.849422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:32.998 [2024-12-06 13:24:19.849435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.998 [2024-12-06 13:24:19.849447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.998 [2024-12-06 13:24:19.849603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 481.827 ms, result 0 00:29:33.987 00:29:33.987 00:29:33.987 13:24:20 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:36.519 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:36.519 13:24:22 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:36.519 [2024-12-06 13:24:23.046776] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:29:36.519 [2024-12-06 13:24:23.047028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80277 ] 00:29:36.519 [2024-12-06 13:24:23.231560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.519 [2024-12-06 13:24:23.393366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.777 [2024-12-06 13:24:23.754524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:36.777 [2024-12-06 13:24:23.754700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:37.037 [2024-12-06 13:24:23.919335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.919430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:37.037 [2024-12-06 13:24:23.919466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:37.037 [2024-12-06 13:24:23.919478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.919557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.919578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:37.037 [2024-12-06 13:24:23.919591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:37.037 [2024-12-06 13:24:23.919602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.919630] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:37.037 [2024-12-06 13:24:23.920597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:37.037 [2024-12-06 13:24:23.920655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.920671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:37.037 [2024-12-06 13:24:23.920699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:29:37.037 [2024-12-06 13:24:23.920711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.923039] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:37.037 [2024-12-06 13:24:23.940113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.940174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:37.037 [2024-12-06 13:24:23.940223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.075 ms 00:29:37.037 [2024-12-06 13:24:23.940234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.940309] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.940328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:37.037 [2024-12-06 13:24:23.940340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:37.037 [2024-12-06 13:24:23.940360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.950588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.950662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:37.037 [2024-12-06 13:24:23.950695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.093 ms 00:29:37.037 [2024-12-06 13:24:23.950713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.950847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.950880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:37.037 [2024-12-06 13:24:23.950915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:29:37.037 [2024-12-06 13:24:23.950942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.951063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.951081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:37.037 [2024-12-06 13:24:23.951103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:37.037 [2024-12-06 13:24:23.951114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.951171] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:37.037 [2024-12-06 13:24:23.956318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.956367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:37.037 [2024-12-06 13:24:23.956403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:29:37.037 [2024-12-06 13:24:23.956414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.956454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.956469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:37.037 [2024-12-06 13:24:23.956480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:37.037 [2024-12-06 13:24:23.956490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.037 [2024-12-06 13:24:23.956602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:37.037 [2024-12-06 13:24:23.956636] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:37.037 [2024-12-06 13:24:23.956679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:37.037 [2024-12-06 13:24:23.956705] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:37.037 [2024-12-06 13:24:23.956812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:37.037 [2024-12-06 
13:24:23.956838] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:37.037 [2024-12-06 13:24:23.956854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:37.037 [2024-12-06 13:24:23.956869] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:37.037 [2024-12-06 13:24:23.956883] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:37.037 [2024-12-06 13:24:23.956896] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:37.037 [2024-12-06 13:24:23.956922] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:37.037 [2024-12-06 13:24:23.956937] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:37.037 [2024-12-06 13:24:23.956948] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:37.037 [2024-12-06 13:24:23.956959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.037 [2024-12-06 13:24:23.956971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:37.037 [2024-12-06 13:24:23.956982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:29:37.038 [2024-12-06 13:24:23.956993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.038 [2024-12-06 13:24:23.957086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.038 [2024-12-06 13:24:23.957101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:37.038 [2024-12-06 13:24:23.957113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:37.038 [2024-12-06 13:24:23.957136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.038 [2024-12-06 13:24:23.957272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:37.038 [2024-12-06 13:24:23.957301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:37.038 [2024-12-06 13:24:23.957314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:37.038 [2024-12-06 13:24:23.957347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:37.038 [2024-12-06 13:24:23.957378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:37.038 [2024-12-06 13:24:23.957398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:37.038 [2024-12-06 13:24:23.957408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:37.038 [2024-12-06 13:24:23.957418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:37.038 [2024-12-06 13:24:23.957441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:37.038 [2024-12-06 13:24:23.957467] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:37.038 [2024-12-06 13:24:23.957480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:37.038 [2024-12-06 13:24:23.957504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:37.038 [2024-12-06 13:24:23.957536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:37.038 [2024-12-06 13:24:23.957568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:37.038 [2024-12-06 13:24:23.957599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:37.038 [2024-12-06 13:24:23.957631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:37.038 [2024-12-06 13:24:23.957663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:37.038 [2024-12-06 13:24:23.957712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:37.038 [2024-12-06 13:24:23.957723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:37.038 [2024-12-06 13:24:23.957733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:37.038 [2024-12-06 13:24:23.957744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:37.038 [2024-12-06 13:24:23.957755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:37.038 [2024-12-06 13:24:23.957765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:37.038 [2024-12-06 13:24:23.957786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:37.038 [2024-12-06 13:24:23.957796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957807] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:37.038 [2024-12-06 13:24:23.957818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:37.038 [2024-12-06 13:24:23.957840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:29:37.038 [2024-12-06 13:24:23.957851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:37.038 [2024-12-06 13:24:23.957864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:37.038 [2024-12-06 13:24:23.957875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:37.038 [2024-12-06 13:24:23.957901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:37.038 [2024-12-06 13:24:23.957911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:37.038 [2024-12-06 13:24:23.957921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:37.038 [2024-12-06 13:24:23.957932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:37.038 [2024-12-06 13:24:23.957945] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:37.038 [2024-12-06 13:24:23.957959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:37.038 [2024-12-06 13:24:23.957976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:37.038 [2024-12-06 13:24:23.957988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:37.038 [2024-12-06 13:24:23.957999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:37.038 [2024-12-06 13:24:23.958010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:37.038 [2024-12-06 13:24:23.958021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:37.038 [2024-12-06 13:24:23.958032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:37.038 [2024-12-06 13:24:23.958043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:37.038 [2024-12-06 13:24:23.958054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:37.038 [2024-12-06 13:24:23.958065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:37.038 [2024-12-06 13:24:23.958077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:37.038 [2024-12-06 13:24:23.958089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:37.038 [2024-12-06 13:24:23.958100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:37.038 [2024-12-06 13:24:23.958111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:37.038 [2024-12-06 13:24:23.958135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:29:37.038 [2024-12-06 13:24:23.958146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:37.038 [2024-12-06 13:24:23.958187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:37.038 [2024-12-06 13:24:23.958202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:37.038 [2024-12-06 13:24:23.958214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:37.038 [2024-12-06 13:24:23.958226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:37.038 [2024-12-06 13:24:23.958238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:37.038 [2024-12-06 13:24:23.958251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.038 [2024-12-06 13:24:23.958300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:37.038 [2024-12-06 13:24:23.958314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:29:37.038 [2024-12-06 13:24:23.958326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.038 [2024-12-06 13:24:23.998720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.038 [2024-12-06 13:24:23.998785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:37.038 [2024-12-06 13:24:23.998821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.322 ms 00:29:37.038 [2024-12-06 13:24:23.998853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.038 [2024-12-06 13:24:23.998993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.038 [2024-12-06 13:24:23.999021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:37.038 [2024-12-06 13:24:23.999033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:37.038 [2024-12-06 13:24:23.999048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.051129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.051208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:37.298 [2024-12-06 13:24:24.051243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.925 ms 00:29:37.298 [2024-12-06 13:24:24.051255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.051327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.051343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:37.298 [2024-12-06 13:24:24.051362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:37.298 [2024-12-06 13:24:24.051373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.052129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.052217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:37.298 [2024-12-06 
13:24:24.052248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:29:37.298 [2024-12-06 13:24:24.052259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.052473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.052502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:37.298 [2024-12-06 13:24:24.052541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:29:37.298 [2024-12-06 13:24:24.052562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.072477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.072558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:37.298 [2024-12-06 13:24:24.072598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.870 ms 00:29:37.298 [2024-12-06 13:24:24.072610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.088767] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:37.298 [2024-12-06 13:24:24.088812] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:37.298 [2024-12-06 13:24:24.088861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.088874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:37.298 [2024-12-06 13:24:24.088902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.029 ms 00:29:37.298 [2024-12-06 13:24:24.088912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.115949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.115993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:37.298 [2024-12-06 13:24:24.116024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.991 ms 00:29:37.298 [2024-12-06 13:24:24.116036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.130328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.130371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:37.298 [2024-12-06 13:24:24.130404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.233 ms 00:29:37.298 [2024-12-06 13:24:24.130417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.144376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.144419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:37.298 [2024-12-06 13:24:24.144451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.912 ms 00:29:37.298 [2024-12-06 13:24:24.144461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.298 [2024-12-06 13:24:24.145386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.298 [2024-12-06 13:24:24.145421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:37.299 [2024-12-06 13:24:24.145473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.783 ms 00:29:37.299 [2024-12-06 13:24:24.145485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.218128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.218225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:37.299 [2024-12-06 13:24:24.218310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.611 ms 00:29:37.299 [2024-12-06 13:24:24.218323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.231159] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:37.299 [2024-12-06 13:24:24.234419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.234458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:37.299 [2024-12-06 13:24:24.234492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.909 ms 00:29:37.299 [2024-12-06 13:24:24.234505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.234654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.234676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:37.299 [2024-12-06 13:24:24.234694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:37.299 [2024-12-06 13:24:24.234707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.234817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.234861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:37.299 [2024-12-06 13:24:24.234875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:37.299 [2024-12-06 13:24:24.234887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.234920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.234937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:37.299 [2024-12-06 13:24:24.234949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:37.299 [2024-12-06 13:24:24.234961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.235010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:37.299 [2024-12-06 13:24:24.235027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.235039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:37.299 [2024-12-06 13:24:24.235051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:37.299 [2024-12-06 13:24:24.235063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.266300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.266367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:37.299 [2024-12-06 13:24:24.266392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.210 ms 00:29:37.299 [2024-12-06 13:24:24.266406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:37.299 [2024-12-06 13:24:24.266492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.299 [2024-12-06 13:24:24.266512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:37.299 [2024-12-06 13:24:24.266525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:37.299 [2024-12-06 13:24:24.266538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.299 [2024-12-06 13:24:24.268025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 348.140 ms, result 0 00:29:38.673  [2024-12-06T13:24:26.622Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-06T13:24:27.555Z] Copying: 43/1024 [MB] (22 MBps) [2024-12-06T13:24:28.552Z] Copying: 65/1024 [MB] (21 MBps) [2024-12-06T13:24:29.484Z] Copying: 87/1024 [MB] (22 MBps) [2024-12-06T13:24:30.418Z] Copying: 110/1024 [MB] (22 MBps) [2024-12-06T13:24:31.352Z] Copying: 134/1024 [MB] (23 MBps) [2024-12-06T13:24:32.302Z] Copying: 158/1024 [MB] (24 MBps) [2024-12-06T13:24:33.692Z] Copying: 182/1024 [MB] (24 MBps) [2024-12-06T13:24:34.626Z] Copying: 206/1024 [MB] (23 MBps) [2024-12-06T13:24:35.567Z] Copying: 230/1024 [MB] (24 MBps) [2024-12-06T13:24:36.498Z] Copying: 255/1024 [MB] (24 MBps) [2024-12-06T13:24:37.429Z] Copying: 280/1024 [MB] (24 MBps) [2024-12-06T13:24:38.361Z] Copying: 304/1024 [MB] (24 MBps) [2024-12-06T13:24:39.294Z] Copying: 329/1024 [MB] (24 MBps) [2024-12-06T13:24:40.674Z] Copying: 353/1024 [MB] (23 MBps) [2024-12-06T13:24:41.624Z] Copying: 375/1024 [MB] (22 MBps) [2024-12-06T13:24:42.556Z] Copying: 398/1024 [MB] (22 MBps) [2024-12-06T13:24:43.491Z] Copying: 421/1024 [MB] (22 MBps) [2024-12-06T13:24:44.426Z] Copying: 443/1024 [MB] (22 MBps) [2024-12-06T13:24:45.359Z] Copying: 466/1024 [MB] (22 MBps) [2024-12-06T13:24:46.292Z] Copying: 489/1024 [MB] (23 MBps) [2024-12-06T13:24:47.664Z] Copying: 512/1024 [MB] (22 MBps) [2024-12-06T13:24:48.597Z] Copying: 535/1024 [MB] (22 MBps) [2024-12-06T13:24:49.532Z] Copying: 557/1024 [MB] (22 MBps) [2024-12-06T13:24:50.467Z] Copying: 580/1024 [MB] (23 MBps) [2024-12-06T13:24:51.402Z] Copying: 604/1024 [MB] (24 MBps) [2024-12-06T13:24:52.337Z] Copying: 630/1024 [MB] (25 MBps) [2024-12-06T13:24:53.713Z] Copying: 655/1024 [MB] (25 MBps) [2024-12-06T13:24:54.650Z] Copying: 679/1024 [MB] (24 MBps) [2024-12-06T13:24:55.583Z] Copying: 702/1024 [MB] (22 MBps) [2024-12-06T13:24:56.517Z] Copying: 726/1024 [MB] (23 MBps) [2024-12-06T13:24:57.454Z] Copying: 749/1024 [MB] (23 MBps) [2024-12-06T13:24:58.390Z] Copying: 772/1024 [MB] (22 MBps) [2024-12-06T13:24:59.343Z] Copying: 796/1024 [MB] (24 MBps) [2024-12-06T13:25:00.283Z] Copying: 820/1024 [MB] (23 MBps) [2024-12-06T13:25:01.660Z] Copying: 844/1024 [MB] (23 MBps) [2024-12-06T13:25:02.596Z] Copying: 867/1024 [MB] (23 MBps) [2024-12-06T13:25:03.533Z] Copying: 891/1024 [MB] (23 MBps) [2024-12-06T13:25:04.472Z] Copying: 914/1024 [MB] (23 MBps) [2024-12-06T13:25:05.407Z] Copying: 938/1024 [MB] (23 MBps) [2024-12-06T13:25:06.340Z] Copying: 961/1024 [MB] (23 MBps) [2024-12-06T13:25:07.716Z] Copying: 984/1024 [MB] (22 MBps) [2024-12-06T13:25:08.650Z] Copying: 1008/1024 [MB] (24 MBps) [2024-12-06T13:25:09.216Z] Copying: 1023/1024 [MB] (14 MBps) [2024-12-06T13:25:09.216Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 13:25:09.042371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.200 [2024-12-06 13:25:09.042442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Deinit core IO channel 00:30:22.200 [2024-12-06 13:25:09.042493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:22.200 [2024-12-06 13:25:09.042521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.200 [2024-12-06 13:25:09.043292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:22.200 [2024-12-06 13:25:09.048720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.200 [2024-12-06 13:25:09.048777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:22.200 [2024-12-06 13:25:09.048810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.355 ms 00:30:22.200 [2024-12-06 13:25:09.048822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.200 [2024-12-06 13:25:09.061889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.200 [2024-12-06 13:25:09.061952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:22.200 [2024-12-06 13:25:09.061986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.228 ms 00:30:22.200 [2024-12-06 13:25:09.062006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.200 [2024-12-06 13:25:09.083914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.200 [2024-12-06 13:25:09.083972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:22.200 [2024-12-06 13:25:09.084005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.885 ms 00:30:22.200 [2024-12-06 13:25:09.084017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.200 [2024-12-06 13:25:09.089935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.200 [2024-12-06 13:25:09.089986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:22.200 [2024-12-06 13:25:09.090015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.880 ms 00:30:22.200 [2024-12-06 13:25:09.090035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.200 [2024-12-06 13:25:09.119764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.200 [2024-12-06 13:25:09.119837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:22.200 [2024-12-06 13:25:09.119874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.682 ms 00:30:22.200 [2024-12-06 13:25:09.119886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.200 [2024-12-06 13:25:09.136745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.200 [2024-12-06 13:25:09.136802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:22.200 [2024-12-06 13:25:09.136835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.814 ms 00:30:22.200 [2024-12-06 13:25:09.136847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.460 [2024-12-06 13:25:09.246555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.460 [2024-12-06 13:25:09.246608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:22.460 [2024-12-06 13:25:09.246628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.648 ms 00:30:22.460 [2024-12-06 13:25:09.246642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:22.460 [2024-12-06 13:25:09.274073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.460 [2024-12-06 13:25:09.274151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:22.460 [2024-12-06 13:25:09.274185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.408 ms 00:30:22.460 [2024-12-06 13:25:09.274196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.460 [2024-12-06 13:25:09.303628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.460 [2024-12-06 13:25:09.303675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:22.460 [2024-12-06 13:25:09.303693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.391 ms 00:30:22.460 [2024-12-06 13:25:09.303706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.460 [2024-12-06 13:25:09.334296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.460 [2024-12-06 13:25:09.334356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:22.460 [2024-12-06 13:25:09.334374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.528 ms 00:30:22.460 [2024-12-06 13:25:09.334386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.460 [2024-12-06 13:25:09.362868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.460 [2024-12-06 13:25:09.362927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:22.460 [2024-12-06 13:25:09.362960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.386 ms 00:30:22.460 [2024-12-06 13:25:09.362972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.460 [2024-12-06 13:25:09.363016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:22.460 [2024-12-06 13:25:09.363039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115456 / 261120 wr_cnt: 1 state: open 00:30:22.460 [2024-12-06 13:25:09.363054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363215] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 
[2024-12-06 13:25:09.363522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:22.460 [2024-12-06 13:25:09.363743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 
state: free 00:30:22.461 [2024-12-06 13:25:09.363829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.363996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 
0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:22.461 [2024-12-06 13:25:09.364349] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:22.461 [2024-12-06 13:25:09.364362] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25e71a3f-895e-4526-801e-79081fb50ab9 00:30:22.461 [2024-12-06 13:25:09.364375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115456 00:30:22.461 [2024-12-06 13:25:09.364386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116416 00:30:22.461 [2024-12-06 13:25:09.364398] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115456 00:30:22.461 [2024-12-06 13:25:09.364410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:30:22.461 [2024-12-06 13:25:09.364438] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:22.461 [2024-12-06 13:25:09.364450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:22.461 [2024-12-06 13:25:09.364463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:22.461 [2024-12-06 13:25:09.364474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:22.461 [2024-12-06 13:25:09.364485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:22.461 [2024-12-06 13:25:09.364496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.461 [2024-12-06 13:25:09.364509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:22.461 
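The WAF in the statistics dump above is consistent with the two counters printed beside it, assuming (as the ftl_debug output suggests) it is simply total writes divided by user writes:

# WAF cross-check using the counters from the ftl_dev_dump_stats output above.
total_writes = 116416  # "total writes" from the dump
user_writes = 115456   # "user writes" (== total valid LBAs) from the dump
print(f"WAF = {total_writes / user_writes:.4f}")  # prints WAF = 1.0083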
[2024-12-06 13:25:09.364535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:30:22.461 [2024-12-06 13:25:09.364547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.461 [2024-12-06 13:25:09.380788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.461 [2024-12-06 13:25:09.380843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:22.461 [2024-12-06 13:25:09.380884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.195 ms 00:30:22.461 [2024-12-06 13:25:09.380896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.461 [2024-12-06 13:25:09.381406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.461 [2024-12-06 13:25:09.381436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:22.461 [2024-12-06 13:25:09.381451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:30:22.461 [2024-12-06 13:25:09.381464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.461 [2024-12-06 13:25:09.424479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.461 [2024-12-06 13:25:09.424553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:22.461 [2024-12-06 13:25:09.424587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.461 [2024-12-06 13:25:09.424599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.461 [2024-12-06 13:25:09.424679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.461 [2024-12-06 13:25:09.424696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:22.461 [2024-12-06 13:25:09.424709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.461 [2024-12-06 13:25:09.424721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.461 [2024-12-06 13:25:09.424854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.461 [2024-12-06 13:25:09.424880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:22.461 [2024-12-06 13:25:09.424894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.461 [2024-12-06 13:25:09.424905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.461 [2024-12-06 13:25:09.424929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.461 [2024-12-06 13:25:09.424943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:22.461 [2024-12-06 13:25:09.424955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.461 [2024-12-06 13:25:09.424967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.527907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.528023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:22.721 [2024-12-06 13:25:09.528061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.528073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.613625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.613714] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:22.721 [2024-12-06 13:25:09.613750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.613764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.613882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.613901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:22.721 [2024-12-06 13:25:09.613914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.613932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.613983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.613999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:22.721 [2024-12-06 13:25:09.614012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.614024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.614178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.614203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:22.721 [2024-12-06 13:25:09.614217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.614236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.614303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.614328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:22.721 [2024-12-06 13:25:09.614343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.614355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.614404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.614420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:22.721 [2024-12-06 13:25:09.614433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.614445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.614525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.721 [2024-12-06 13:25:09.614543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:22.721 [2024-12-06 13:25:09.614556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.721 [2024-12-06 13:25:09.614568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.721 [2024-12-06 13:25:09.614739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 575.981 ms, result 0 00:30:24.099 00:30:24.099 00:30:24.099 13:25:11 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:24.357 [2024-12-06 13:25:11.178647] Starting SPDK v25.01-pre git sha1 e9db16374 
/ DPDK 24.03.0 initialization... 00:30:24.357 [2024-12-06 13:25:11.178830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80743 ] 00:30:24.357 [2024-12-06 13:25:11.359852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.614 [2024-12-06 13:25:11.470887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.872 [2024-12-06 13:25:11.818641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:24.872 [2024-12-06 13:25:11.818755] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:25.130 [2024-12-06 13:25:11.983143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:11.983239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:25.130 [2024-12-06 13:25:11.983293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:25.130 [2024-12-06 13:25:11.983306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:11.983374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:11.983396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:25.130 [2024-12-06 13:25:11.983409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:25.130 [2024-12-06 13:25:11.983420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:11.983451] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:25.130 [2024-12-06 13:25:11.984400] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:25.130 [2024-12-06 13:25:11.984444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:11.984459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:25.130 [2024-12-06 13:25:11.984473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:30:25.130 [2024-12-06 13:25:11.984484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:11.986596] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:25.130 [2024-12-06 13:25:12.003359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.003423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:25.130 [2024-12-06 13:25:12.003458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.765 ms 00:30:25.130 [2024-12-06 13:25:12.003471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.003553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.003573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:25.130 [2024-12-06 13:25:12.003586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:25.130 [2024-12-06 13:25:12.003598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.012577] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.012652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:25.130 [2024-12-06 13:25:12.012669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.882 ms 00:30:25.130 [2024-12-06 13:25:12.012688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.012787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.012806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:25.130 [2024-12-06 13:25:12.012818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:25.130 [2024-12-06 13:25:12.012830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.012905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.012924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:25.130 [2024-12-06 13:25:12.012938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:25.130 [2024-12-06 13:25:12.012949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.012994] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:25.130 [2024-12-06 13:25:12.017953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.017994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:25.130 [2024-12-06 13:25:12.018016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.968 ms 00:30:25.130 [2024-12-06 13:25:12.018028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.018081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.018099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:25.130 [2024-12-06 13:25:12.018112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:25.130 [2024-12-06 13:25:12.018136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.018206] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:25.130 [2024-12-06 13:25:12.018242] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:25.130 [2024-12-06 13:25:12.018300] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:25.130 [2024-12-06 13:25:12.018328] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:25.130 [2024-12-06 13:25:12.018440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:25.130 [2024-12-06 13:25:12.018455] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:25.130 [2024-12-06 13:25:12.018471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:25.130 [2024-12-06 13:25:12.018486] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:25.130 [2024-12-06 
13:25:12.018500] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:25.130 [2024-12-06 13:25:12.018512] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:25.130 [2024-12-06 13:25:12.018524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:25.130 [2024-12-06 13:25:12.018540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:25.130 [2024-12-06 13:25:12.018553] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:25.130 [2024-12-06 13:25:12.018566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.018578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:25.130 [2024-12-06 13:25:12.018590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:30:25.130 [2024-12-06 13:25:12.018602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.018699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.130 [2024-12-06 13:25:12.018716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:25.130 [2024-12-06 13:25:12.018729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:25.130 [2024-12-06 13:25:12.018740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.130 [2024-12-06 13:25:12.018864] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:25.130 [2024-12-06 13:25:12.018896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:25.130 [2024-12-06 13:25:12.018911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:25.130 [2024-12-06 13:25:12.018923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.131 [2024-12-06 13:25:12.018936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:25.131 [2024-12-06 13:25:12.018946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:25.131 [2024-12-06 13:25:12.018957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:25.131 [2024-12-06 13:25:12.018968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:25.131 [2024-12-06 13:25:12.018978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:25.131 [2024-12-06 13:25:12.018989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:25.131 [2024-12-06 13:25:12.019000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:25.131 [2024-12-06 13:25:12.019011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:25.131 [2024-12-06 13:25:12.019021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:25.131 [2024-12-06 13:25:12.019045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:25.131 [2024-12-06 13:25:12.019058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:25.131 [2024-12-06 13:25:12.019069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:25.131 [2024-12-06 13:25:12.019092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:25.131 [2024-12-06 
13:25:12.019102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:25.131 [2024-12-06 13:25:12.019138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.131 [2024-12-06 13:25:12.019168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:25.131 [2024-12-06 13:25:12.019184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.131 [2024-12-06 13:25:12.019205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:25.131 [2024-12-06 13:25:12.019217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.131 [2024-12-06 13:25:12.019238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:25.131 [2024-12-06 13:25:12.019249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:25.131 [2024-12-06 13:25:12.019270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:25.131 [2024-12-06 13:25:12.019281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:25.131 [2024-12-06 13:25:12.019302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:25.131 [2024-12-06 13:25:12.019313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:25.131 [2024-12-06 13:25:12.019323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:25.131 [2024-12-06 13:25:12.019334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:25.131 [2024-12-06 13:25:12.019344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:25.131 [2024-12-06 13:25:12.019355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:25.131 [2024-12-06 13:25:12.019376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:25.131 [2024-12-06 13:25:12.019386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019396] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:25.131 [2024-12-06 13:25:12.019408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:25.131 [2024-12-06 13:25:12.019420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:25.131 [2024-12-06 13:25:12.019432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:25.131 [2024-12-06 13:25:12.019444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:25.131 [2024-12-06 13:25:12.019455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:25.131 [2024-12-06 13:25:12.019466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:30:25.131 [2024-12-06 13:25:12.019478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:25.131 [2024-12-06 13:25:12.019488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:25.131 [2024-12-06 13:25:12.019499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:25.131 [2024-12-06 13:25:12.019511] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:25.131 [2024-12-06 13:25:12.019526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:25.131 [2024-12-06 13:25:12.019545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:25.131 [2024-12-06 13:25:12.019557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:25.131 [2024-12-06 13:25:12.019568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:25.131 [2024-12-06 13:25:12.019579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:25.131 [2024-12-06 13:25:12.019590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:25.131 [2024-12-06 13:25:12.019601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:25.131 [2024-12-06 13:25:12.019613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:25.131 [2024-12-06 13:25:12.019624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:25.131 [2024-12-06 13:25:12.019636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:25.131 [2024-12-06 13:25:12.019648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:25.131 [2024-12-06 13:25:12.019659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:25.131 [2024-12-06 13:25:12.019671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:25.131 [2024-12-06 13:25:12.019682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:25.131 [2024-12-06 13:25:12.019694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:25.131 [2024-12-06 13:25:12.019705] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:25.131 [2024-12-06 13:25:12.019717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:25.131 [2024-12-06 13:25:12.019730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:25.131 [2024-12-06 13:25:12.019742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:25.131 [2024-12-06 13:25:12.019753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:25.131 [2024-12-06 13:25:12.019774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:25.131 [2024-12-06 13:25:12.019787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.019799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:25.131 [2024-12-06 13:25:12.019811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:30:25.131 [2024-12-06 13:25:12.019824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.131 [2024-12-06 13:25:12.059312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.059376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:25.131 [2024-12-06 13:25:12.059413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.418 ms 00:30:25.131 [2024-12-06 13:25:12.059432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.131 [2024-12-06 13:25:12.059550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.059566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:25.131 [2024-12-06 13:25:12.059579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:25.131 [2024-12-06 13:25:12.059606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.131 [2024-12-06 13:25:12.117357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.117421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.131 [2024-12-06 13:25:12.117458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.650 ms 00:30:25.131 [2024-12-06 13:25:12.117472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.131 [2024-12-06 13:25:12.117554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.117572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:25.131 [2024-12-06 13:25:12.117592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:25.131 [2024-12-06 13:25:12.117604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.131 [2024-12-06 13:25:12.118294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.118335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:25.131 [2024-12-06 13:25:12.118351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:30:25.131 [2024-12-06 13:25:12.118364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.131 [2024-12-06 13:25:12.118543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.118564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
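The l2p region size in the layout dump above also follows from the parameters printed with it: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB shown for "Region l2p". A minimal check over those dumped values:

# L2P table sizing: entries * address size should equal the dumped l2p region.
l2p_entries = 20971520  # "L2P entries" from ftl_layout_setup
addr_size_bytes = 4     # "L2P address size"
print(f"l2p = {l2p_entries * addr_size_bytes / (1 << 20):.2f} MiB")  # 80.00 MiB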
00:30:25.131 [2024-12-06 13:25:12.118585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:30:25.131 [2024-12-06 13:25:12.118597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.131 [2024-12-06 13:25:12.138742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.131 [2024-12-06 13:25:12.138797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:25.131 [2024-12-06 13:25:12.138816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.115 ms 00:30:25.131 [2024-12-06 13:25:12.138829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.389 [2024-12-06 13:25:12.156067] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:25.389 [2024-12-06 13:25:12.156162] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:25.389 [2024-12-06 13:25:12.156200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.389 [2024-12-06 13:25:12.156214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:25.389 [2024-12-06 13:25:12.156227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.226 ms 00:30:25.389 [2024-12-06 13:25:12.156239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.389 [2024-12-06 13:25:12.190038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.389 [2024-12-06 13:25:12.190098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:25.389 [2024-12-06 13:25:12.190131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.749 ms 00:30:25.389 [2024-12-06 13:25:12.190171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.389 [2024-12-06 13:25:12.205799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.389 [2024-12-06 13:25:12.205855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:25.389 [2024-12-06 13:25:12.205887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.572 ms 00:30:25.390 [2024-12-06 13:25:12.205899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.220374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.220417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:25.390 [2024-12-06 13:25:12.220450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.430 ms 00:30:25.390 [2024-12-06 13:25:12.220461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.221447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.221500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:25.390 [2024-12-06 13:25:12.221521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:30:25.390 [2024-12-06 13:25:12.221533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.296688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.296783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:25.390 [2024-12-06 13:25:12.296828] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.121 ms 00:30:25.390 [2024-12-06 13:25:12.296852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.308880] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:25.390 [2024-12-06 13:25:12.311507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.311562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:25.390 [2024-12-06 13:25:12.311595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.588 ms 00:30:25.390 [2024-12-06 13:25:12.311606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.311707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.311728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:25.390 [2024-12-06 13:25:12.311745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:25.390 [2024-12-06 13:25:12.311756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.313819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.313870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:25.390 [2024-12-06 13:25:12.313902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:30:25.390 [2024-12-06 13:25:12.313925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.313961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.313977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:25.390 [2024-12-06 13:25:12.313989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:25.390 [2024-12-06 13:25:12.314000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.314047] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:25.390 [2024-12-06 13:25:12.314063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.314074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:25.390 [2024-12-06 13:25:12.314085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:25.390 [2024-12-06 13:25:12.314112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.343290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.343366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:25.390 [2024-12-06 13:25:12.343409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.137 ms 00:30:25.390 [2024-12-06 13:25:12.343422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.343505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.390 [2024-12-06 13:25:12.343523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:25.390 [2024-12-06 13:25:12.343536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:25.390 [2024-12-06 13:25:12.343546] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:25.390 [2024-12-06 13:25:12.345717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.356 ms, result 0 00:30:26.767  [2024-12-06T13:25:14.719Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-06T13:25:15.652Z] Copying: 47/1024 [MB] (24 MBps) [2024-12-06T13:25:16.588Z] Copying: 72/1024 [MB] (25 MBps) [2024-12-06T13:25:17.966Z] Copying: 98/1024 [MB] (25 MBps) [2024-12-06T13:25:18.902Z] Copying: 123/1024 [MB] (25 MBps) [2024-12-06T13:25:19.838Z] Copying: 149/1024 [MB] (25 MBps) [2024-12-06T13:25:20.791Z] Copying: 174/1024 [MB] (25 MBps) [2024-12-06T13:25:21.733Z] Copying: 200/1024 [MB] (25 MBps) [2024-12-06T13:25:22.667Z] Copying: 226/1024 [MB] (25 MBps) [2024-12-06T13:25:23.600Z] Copying: 251/1024 [MB] (25 MBps) [2024-12-06T13:25:24.974Z] Copying: 276/1024 [MB] (25 MBps) [2024-12-06T13:25:25.905Z] Copying: 302/1024 [MB] (25 MBps) [2024-12-06T13:25:26.838Z] Copying: 326/1024 [MB] (24 MBps) [2024-12-06T13:25:27.774Z] Copying: 352/1024 [MB] (25 MBps) [2024-12-06T13:25:28.708Z] Copying: 376/1024 [MB] (24 MBps) [2024-12-06T13:25:29.643Z] Copying: 401/1024 [MB] (24 MBps) [2024-12-06T13:25:30.579Z] Copying: 426/1024 [MB] (24 MBps) [2024-12-06T13:25:31.957Z] Copying: 451/1024 [MB] (25 MBps) [2024-12-06T13:25:32.892Z] Copying: 476/1024 [MB] (25 MBps) [2024-12-06T13:25:33.827Z] Copying: 501/1024 [MB] (24 MBps) [2024-12-06T13:25:34.765Z] Copying: 526/1024 [MB] (24 MBps) [2024-12-06T13:25:35.702Z] Copying: 549/1024 [MB] (23 MBps) [2024-12-06T13:25:36.637Z] Copying: 573/1024 [MB] (23 MBps) [2024-12-06T13:25:37.573Z] Copying: 596/1024 [MB] (23 MBps) [2024-12-06T13:25:38.950Z] Copying: 620/1024 [MB] (24 MBps) [2024-12-06T13:25:39.886Z] Copying: 644/1024 [MB] (23 MBps) [2024-12-06T13:25:40.821Z] Copying: 667/1024 [MB] (23 MBps) [2024-12-06T13:25:41.753Z] Copying: 691/1024 [MB] (24 MBps) [2024-12-06T13:25:42.686Z] Copying: 715/1024 [MB] (23 MBps) [2024-12-06T13:25:43.619Z] Copying: 739/1024 [MB] (23 MBps) [2024-12-06T13:25:44.993Z] Copying: 764/1024 [MB] (25 MBps) [2024-12-06T13:25:45.924Z] Copying: 788/1024 [MB] (24 MBps) [2024-12-06T13:25:46.858Z] Copying: 813/1024 [MB] (24 MBps) [2024-12-06T13:25:47.792Z] Copying: 837/1024 [MB] (23 MBps) [2024-12-06T13:25:48.727Z] Copying: 860/1024 [MB] (23 MBps) [2024-12-06T13:25:49.662Z] Copying: 884/1024 [MB] (23 MBps) [2024-12-06T13:25:50.598Z] Copying: 908/1024 [MB] (23 MBps) [2024-12-06T13:25:51.975Z] Copying: 931/1024 [MB] (23 MBps) [2024-12-06T13:25:52.911Z] Copying: 955/1024 [MB] (23 MBps) [2024-12-06T13:25:53.848Z] Copying: 977/1024 [MB] (22 MBps) [2024-12-06T13:25:54.793Z] Copying: 1002/1024 [MB] (24 MBps) [2024-12-06T13:25:54.793Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-06 13:25:54.622722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.777 [2024-12-06 13:25:54.622803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:07.777 [2024-12-06 13:25:54.622832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:07.777 [2024-12-06 13:25:54.622846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.777 [2024-12-06 13:25:54.622881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:07.777 [2024-12-06 13:25:54.627068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.777 [2024-12-06 13:25:54.627108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Unregister IO device 00:31:07.777 [2024-12-06 13:25:54.627149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.163 ms 00:31:07.777 [2024-12-06 13:25:54.627163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.777 [2024-12-06 13:25:54.627417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.777 [2024-12-06 13:25:54.627445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:07.777 [2024-12-06 13:25:54.627460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:31:07.777 [2024-12-06 13:25:54.627478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.777 [2024-12-06 13:25:54.632456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.777 [2024-12-06 13:25:54.632514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:07.777 [2024-12-06 13:25:54.632532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.954 ms 00:31:07.777 [2024-12-06 13:25:54.632544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.777 [2024-12-06 13:25:54.638778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.777 [2024-12-06 13:25:54.638833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:07.777 [2024-12-06 13:25:54.638848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:31:07.778 [2024-12-06 13:25:54.638866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.778 [2024-12-06 13:25:54.669200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.778 [2024-12-06 13:25:54.669242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:07.778 [2024-12-06 13:25:54.669260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.262 ms 00:31:07.778 [2024-12-06 13:25:54.669273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.778 [2024-12-06 13:25:54.686464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.778 [2024-12-06 13:25:54.686525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:07.778 [2024-12-06 13:25:54.686559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.145 ms 00:31:07.778 [2024-12-06 13:25:54.686571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.038 [2024-12-06 13:25:54.806289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.038 [2024-12-06 13:25:54.806369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:08.038 [2024-12-06 13:25:54.806389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.639 ms 00:31:08.038 [2024-12-06 13:25:54.806402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.038 [2024-12-06 13:25:54.835359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.038 [2024-12-06 13:25:54.835418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:08.038 [2024-12-06 13:25:54.835450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.934 ms 00:31:08.038 [2024-12-06 13:25:54.835462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.038 [2024-12-06 13:25:54.866503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.038 [2024-12-06 13:25:54.866551] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:08.038 [2024-12-06 13:25:54.866568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.984 ms 00:31:08.038 [2024-12-06 13:25:54.866580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.038 [2024-12-06 13:25:54.896051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.038 [2024-12-06 13:25:54.896111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:08.038 [2024-12-06 13:25:54.896151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.426 ms 00:31:08.038 [2024-12-06 13:25:54.896164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.038 [2024-12-06 13:25:54.923977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.038 [2024-12-06 13:25:54.924037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:08.039 [2024-12-06 13:25:54.924069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.727 ms 00:31:08.039 [2024-12-06 13:25:54.924080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.039 [2024-12-06 13:25:54.924122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:08.039 [2024-12-06 13:25:54.924156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:08.039 [2024-12-06 13:25:54.924171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924659] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 
13:25:54.924955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.924990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:08.039 [2024-12-06 13:25:54.925130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
00:31:08.040 [2024-12-06 13:25:54.925261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:08.040 [2024-12-06 13:25:54.925392] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:08.040 [2024-12-06 13:25:54.925404] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25e71a3f-895e-4526-801e-79081fb50ab9 00:31:08.040 [2024-12-06 13:25:54.925416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:08.040 [2024-12-06 13:25:54.925428] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16576 00:31:08.040 [2024-12-06 13:25:54.925440] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15616 00:31:08.040 [2024-12-06 13:25:54.925452] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0615 00:31:08.040 [2024-12-06 13:25:54.925470] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:08.040 [2024-12-06 13:25:54.925494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:08.040 [2024-12-06 13:25:54.925506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:08.040 [2024-12-06 13:25:54.925516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:08.040 [2024-12-06 13:25:54.925527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:08.040 [2024-12-06 13:25:54.925538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.040 [2024-12-06 13:25:54.925550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:08.040 [2024-12-06 13:25:54.925562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:31:08.040 [2024-12-06 13:25:54.925574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.040 [2024-12-06 13:25:54.941617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.040 [2024-12-06 13:25:54.941674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:08.040 [2024-12-06 13:25:54.941714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.988 ms 00:31:08.040 [2024-12-06 13:25:54.941725] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.040 [2024-12-06 13:25:54.942248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.040 [2024-12-06 13:25:54.942287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:08.040 [2024-12-06 13:25:54.942303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:31:08.040 [2024-12-06 13:25:54.942315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.040 [2024-12-06 13:25:54.983437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.040 [2024-12-06 13:25:54.983505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:08.040 [2024-12-06 13:25:54.983537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.040 [2024-12-06 13:25:54.983549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.040 [2024-12-06 13:25:54.983611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.040 [2024-12-06 13:25:54.983626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:08.040 [2024-12-06 13:25:54.983638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.040 [2024-12-06 13:25:54.983649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.040 [2024-12-06 13:25:54.983722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.040 [2024-12-06 13:25:54.983758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:08.040 [2024-12-06 13:25:54.983792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.040 [2024-12-06 13:25:54.983803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.040 [2024-12-06 13:25:54.983826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.040 [2024-12-06 13:25:54.983840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:08.040 [2024-12-06 13:25:54.983851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.040 [2024-12-06 13:25:54.983863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.306 [2024-12-06 13:25:55.083357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.306 [2024-12-06 13:25:55.083468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:08.307 [2024-12-06 13:25:55.083504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.307 [2024-12-06 13:25:55.083517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.307 [2024-12-06 13:25:55.170730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.307 [2024-12-06 13:25:55.170837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:08.307 [2024-12-06 13:25:55.170873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.307 [2024-12-06 13:25:55.170886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.307 [2024-12-06 13:25:55.171012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.307 [2024-12-06 13:25:55.171031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:08.307 [2024-12-06 13:25:55.171044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:31:08.307 [2024-12-06 13:25:55.171062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.307 [2024-12-06 13:25:55.171114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.307 [2024-12-06 13:25:55.171149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:08.307 [2024-12-06 13:25:55.171164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.307 [2024-12-06 13:25:55.171177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.307 [2024-12-06 13:25:55.171313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.307 [2024-12-06 13:25:55.171334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:08.307 [2024-12-06 13:25:55.171347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.307 [2024-12-06 13:25:55.171358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.307 [2024-12-06 13:25:55.171417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.307 [2024-12-06 13:25:55.171436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:08.307 [2024-12-06 13:25:55.171448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.307 [2024-12-06 13:25:55.171460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.307 [2024-12-06 13:25:55.171519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.307 [2024-12-06 13:25:55.171542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:08.308 [2024-12-06 13:25:55.171556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.308 [2024-12-06 13:25:55.171567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.308 [2024-12-06 13:25:55.171631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.308 [2024-12-06 13:25:55.171648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:08.308 [2024-12-06 13:25:55.171661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.308 [2024-12-06 13:25:55.171672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.308 [2024-12-06 13:25:55.171836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.067 ms, result 0 00:31:09.263 00:31:09.263 00:31:09.263 13:25:56 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:11.810 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:11.810 Process with pid 79155 is not found 00:31:11.810 Remove shared memory files 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79155 00:31:11.810 13:25:58 ftl.ftl_restore -- 
common/autotest_common.sh@954 -- # '[' -z 79155 ']' 00:31:11.810 13:25:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79155 00:31:11.810 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79155) - No such process 00:31:11.810 13:25:58 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79155 is not found' 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:11.810 13:25:58 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:31:11.810 00:31:11.810 real 3m25.158s 00:31:11.810 user 3m10.819s 00:31:11.810 sys 0m16.476s 00:31:11.810 13:25:58 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.810 ************************************ 00:31:11.810 13:25:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:11.810 END TEST ftl_restore 00:31:11.810 ************************************ 00:31:11.810 13:25:58 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:11.810 13:25:58 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:11.810 13:25:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.810 13:25:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:11.810 ************************************ 00:31:11.810 START TEST ftl_dirty_shutdown 00:31:11.810 ************************************ 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:11.810 * Looking for test storage... 
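The killprocess trace above hinges on `kill -0`, which delivers no signal and only reports whether the pid still exists; pid 79155 had already exited, so the helper prints 'Process with pid 79155 is not found' and moves on. A standalone sketch of the same idiom (an illustration of the pattern, not the exact autotest_common.sh helper):

killprocess() {
  local pid=$1
  if kill -0 "$pid" 2>/dev/null; then         # pid alive: terminate and reap it
    kill "$pid"
    wait "$pid" 2>/dev/null                   # wait can only reap children of this shell
  else                                        # already gone, as with 79155 above
    echo "Process with pid $pid is not found"
  fi
}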
00:31:11.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:11.810 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.811 --rc genhtml_branch_coverage=1 00:31:11.811 --rc genhtml_function_coverage=1 00:31:11.811 --rc genhtml_legend=1 00:31:11.811 --rc geninfo_all_blocks=1 00:31:11.811 --rc geninfo_unexecuted_blocks=1 00:31:11.811 00:31:11.811 ' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.811 --rc genhtml_branch_coverage=1 00:31:11.811 --rc genhtml_function_coverage=1 00:31:11.811 --rc genhtml_legend=1 00:31:11.811 --rc geninfo_all_blocks=1 00:31:11.811 --rc geninfo_unexecuted_blocks=1 00:31:11.811 00:31:11.811 ' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.811 --rc genhtml_branch_coverage=1 00:31:11.811 --rc genhtml_function_coverage=1 00:31:11.811 --rc genhtml_legend=1 00:31:11.811 --rc geninfo_all_blocks=1 00:31:11.811 --rc geninfo_unexecuted_blocks=1 00:31:11.811 00:31:11.811 ' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.811 --rc genhtml_branch_coverage=1 00:31:11.811 --rc genhtml_function_coverage=1 00:31:11.811 --rc genhtml_legend=1 00:31:11.811 --rc geninfo_all_blocks=1 00:31:11.811 --rc geninfo_unexecuted_blocks=1 00:31:11.811 00:31:11.811 ' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:31:11.811 13:25:58 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81281 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81281 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81281 ']' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:11.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:11.811 13:25:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:11.811 [2024-12-06 13:25:58.766628] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
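waitforlisten above blocks until the spdk_tgt just launched with -m 0x1 (pid 81281) answers RPCs on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A minimal poll loop in the same spirit; rpc_get_methods is SPDK's generic RPC probe, but the loop body itself is an illustrative sketch rather than the real helper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do               # bounded like max_retries=100 above
  "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  kill -0 81281 2>/dev/null || { echo 'spdk_tgt died during startup'; exit 1; }
  sleep 0.5                                   # back off before the next probe
done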
00:31:11.811 [2024-12-06 13:25:58.767274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81281 ] 00:31:12.069 [2024-12-06 13:25:58.944637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.069 [2024-12-06 13:25:59.076534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:13.005 13:25:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:13.573 { 00:31:13.573 "name": "nvme0n1", 00:31:13.573 "aliases": [ 00:31:13.573 "0b6fdfff-91aa-4a95-a29a-a26f8b6eef83" 00:31:13.573 ], 00:31:13.573 "product_name": "NVMe disk", 00:31:13.573 "block_size": 4096, 00:31:13.573 "num_blocks": 1310720, 00:31:13.573 "uuid": "0b6fdfff-91aa-4a95-a29a-a26f8b6eef83", 00:31:13.573 "numa_id": -1, 00:31:13.573 "assigned_rate_limits": { 00:31:13.573 "rw_ios_per_sec": 0, 00:31:13.573 "rw_mbytes_per_sec": 0, 00:31:13.573 "r_mbytes_per_sec": 0, 00:31:13.573 "w_mbytes_per_sec": 0 00:31:13.573 }, 00:31:13.573 "claimed": true, 00:31:13.573 "claim_type": "read_many_write_one", 00:31:13.573 "zoned": false, 00:31:13.573 "supported_io_types": { 00:31:13.573 "read": true, 00:31:13.573 "write": true, 00:31:13.573 "unmap": true, 00:31:13.573 "flush": true, 00:31:13.573 "reset": true, 00:31:13.573 "nvme_admin": true, 00:31:13.573 "nvme_io": true, 00:31:13.573 "nvme_io_md": false, 00:31:13.573 "write_zeroes": true, 00:31:13.573 "zcopy": false, 00:31:13.573 "get_zone_info": false, 00:31:13.573 "zone_management": false, 00:31:13.573 "zone_append": false, 00:31:13.573 "compare": true, 00:31:13.573 "compare_and_write": false, 00:31:13.573 "abort": true, 00:31:13.573 "seek_hole": false, 00:31:13.573 "seek_data": false, 00:31:13.573 
"copy": true, 00:31:13.573 "nvme_iov_md": false 00:31:13.573 }, 00:31:13.573 "driver_specific": { 00:31:13.573 "nvme": [ 00:31:13.573 { 00:31:13.573 "pci_address": "0000:00:11.0", 00:31:13.573 "trid": { 00:31:13.573 "trtype": "PCIe", 00:31:13.573 "traddr": "0000:00:11.0" 00:31:13.573 }, 00:31:13.573 "ctrlr_data": { 00:31:13.573 "cntlid": 0, 00:31:13.573 "vendor_id": "0x1b36", 00:31:13.573 "model_number": "QEMU NVMe Ctrl", 00:31:13.573 "serial_number": "12341", 00:31:13.573 "firmware_revision": "8.0.0", 00:31:13.573 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:13.573 "oacs": { 00:31:13.573 "security": 0, 00:31:13.573 "format": 1, 00:31:13.573 "firmware": 0, 00:31:13.573 "ns_manage": 1 00:31:13.573 }, 00:31:13.573 "multi_ctrlr": false, 00:31:13.573 "ana_reporting": false 00:31:13.573 }, 00:31:13.573 "vs": { 00:31:13.573 "nvme_version": "1.4" 00:31:13.573 }, 00:31:13.573 "ns_data": { 00:31:13.573 "id": 1, 00:31:13.573 "can_share": false 00:31:13.573 } 00:31:13.573 } 00:31:13.573 ], 00:31:13.573 "mp_policy": "active_passive" 00:31:13.573 } 00:31:13.573 } 00:31:13.573 ]' 00:31:13.573 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:13.832 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:14.091 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=09c5d382-45b8-447b-8f8d-f1d5c58dd77d 00:31:14.091 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:14.091 13:26:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09c5d382-45b8-447b-8f8d-f1d5c58dd77d 00:31:14.350 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:14.608 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8627b79b-036b-4011-8234-993d4ad9ef5f 00:31:14.608 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8627b79b-036b-4011-8234-993d4ad9ef5f 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=bbb34f7d-acb7-4315-af36-f75e52245217 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bbb34f7d-acb7-4315-af36-f75e52245217 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=bbb34f7d-acb7-4315-af36-f75e52245217 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size bbb34f7d-acb7-4315-af36-f75e52245217 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bbb34f7d-acb7-4315-af36-f75e52245217 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:14.866 13:26:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bbb34f7d-acb7-4315-af36-f75e52245217 00:31:15.124 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:15.124 { 00:31:15.124 "name": "bbb34f7d-acb7-4315-af36-f75e52245217", 00:31:15.124 "aliases": [ 00:31:15.124 "lvs/nvme0n1p0" 00:31:15.124 ], 00:31:15.124 "product_name": "Logical Volume", 00:31:15.124 "block_size": 4096, 00:31:15.124 "num_blocks": 26476544, 00:31:15.124 "uuid": "bbb34f7d-acb7-4315-af36-f75e52245217", 00:31:15.124 "assigned_rate_limits": { 00:31:15.124 "rw_ios_per_sec": 0, 00:31:15.124 "rw_mbytes_per_sec": 0, 00:31:15.124 "r_mbytes_per_sec": 0, 00:31:15.124 "w_mbytes_per_sec": 0 00:31:15.124 }, 00:31:15.124 "claimed": false, 00:31:15.124 "zoned": false, 00:31:15.124 "supported_io_types": { 00:31:15.124 "read": true, 00:31:15.124 "write": true, 00:31:15.124 "unmap": true, 00:31:15.124 "flush": false, 00:31:15.124 "reset": true, 00:31:15.124 "nvme_admin": false, 00:31:15.124 "nvme_io": false, 00:31:15.124 "nvme_io_md": false, 00:31:15.124 "write_zeroes": true, 00:31:15.124 "zcopy": false, 00:31:15.124 "get_zone_info": false, 00:31:15.124 "zone_management": false, 00:31:15.124 "zone_append": false, 00:31:15.124 "compare": false, 00:31:15.124 "compare_and_write": false, 00:31:15.124 "abort": false, 00:31:15.124 "seek_hole": true, 00:31:15.124 "seek_data": true, 00:31:15.124 "copy": false, 00:31:15.124 "nvme_iov_md": false 00:31:15.124 }, 00:31:15.124 "driver_specific": { 00:31:15.124 "lvol": { 00:31:15.124 "lvol_store_uuid": "8627b79b-036b-4011-8234-993d4ad9ef5f", 00:31:15.124 "base_bdev": "nvme0n1", 00:31:15.124 "thin_provision": true, 00:31:15.124 "num_allocated_clusters": 0, 00:31:15.124 "snapshot": false, 00:31:15.124 "clone": false, 00:31:15.124 "esnap_clone": false 00:31:15.124 } 00:31:15.124 } 00:31:15.124 } 00:31:15.124 ]' 00:31:15.124 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:15.124 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:15.124 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:15.381 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:15.381 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:15.381 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:15.381 13:26:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:31:15.382 13:26:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:15.382 13:26:02 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size bbb34f7d-acb7-4315-af36-f75e52245217 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bbb34f7d-acb7-4315-af36-f75e52245217 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:15.640 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bbb34f7d-acb7-4315-af36-f75e52245217 00:31:15.898 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:15.898 { 00:31:15.898 "name": "bbb34f7d-acb7-4315-af36-f75e52245217", 00:31:15.898 "aliases": [ 00:31:15.898 "lvs/nvme0n1p0" 00:31:15.898 ], 00:31:15.898 "product_name": "Logical Volume", 00:31:15.898 "block_size": 4096, 00:31:15.898 "num_blocks": 26476544, 00:31:15.898 "uuid": "bbb34f7d-acb7-4315-af36-f75e52245217", 00:31:15.898 "assigned_rate_limits": { 00:31:15.898 "rw_ios_per_sec": 0, 00:31:15.898 "rw_mbytes_per_sec": 0, 00:31:15.898 "r_mbytes_per_sec": 0, 00:31:15.898 "w_mbytes_per_sec": 0 00:31:15.898 }, 00:31:15.898 "claimed": false, 00:31:15.898 "zoned": false, 00:31:15.898 "supported_io_types": { 00:31:15.898 "read": true, 00:31:15.898 "write": true, 00:31:15.898 "unmap": true, 00:31:15.898 "flush": false, 00:31:15.898 "reset": true, 00:31:15.898 "nvme_admin": false, 00:31:15.898 "nvme_io": false, 00:31:15.898 "nvme_io_md": false, 00:31:15.898 "write_zeroes": true, 00:31:15.898 "zcopy": false, 00:31:15.898 "get_zone_info": false, 00:31:15.898 "zone_management": false, 00:31:15.898 "zone_append": false, 00:31:15.898 "compare": false, 00:31:15.898 "compare_and_write": false, 00:31:15.898 "abort": false, 00:31:15.898 "seek_hole": true, 00:31:15.898 "seek_data": true, 00:31:15.898 "copy": false, 00:31:15.899 "nvme_iov_md": false 00:31:15.899 }, 00:31:15.899 "driver_specific": { 00:31:15.899 "lvol": { 00:31:15.899 "lvol_store_uuid": "8627b79b-036b-4011-8234-993d4ad9ef5f", 00:31:15.899 "base_bdev": "nvme0n1", 00:31:15.899 "thin_provision": true, 00:31:15.899 "num_allocated_clusters": 0, 00:31:15.899 "snapshot": false, 00:31:15.899 "clone": false, 00:31:15.899 "esnap_clone": false 00:31:15.899 } 00:31:15.899 } 00:31:15.899 } 00:31:15.899 ]' 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:31:15.899 13:26:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:16.157 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:31:16.157 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size bbb34f7d-acb7-4315-af36-f75e52245217 00:31:16.157 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bbb34f7d-acb7-4315-af36-f75e52245217 00:31:16.157 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:16.157 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:16.157 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:16.157 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bbb34f7d-acb7-4315-af36-f75e52245217 00:31:16.415 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:16.415 { 00:31:16.415 "name": "bbb34f7d-acb7-4315-af36-f75e52245217", 00:31:16.415 "aliases": [ 00:31:16.415 "lvs/nvme0n1p0" 00:31:16.415 ], 00:31:16.415 "product_name": "Logical Volume", 00:31:16.415 "block_size": 4096, 00:31:16.415 "num_blocks": 26476544, 00:31:16.415 "uuid": "bbb34f7d-acb7-4315-af36-f75e52245217", 00:31:16.415 "assigned_rate_limits": { 00:31:16.415 "rw_ios_per_sec": 0, 00:31:16.415 "rw_mbytes_per_sec": 0, 00:31:16.415 "r_mbytes_per_sec": 0, 00:31:16.415 "w_mbytes_per_sec": 0 00:31:16.415 }, 00:31:16.415 "claimed": false, 00:31:16.415 "zoned": false, 00:31:16.415 "supported_io_types": { 00:31:16.415 "read": true, 00:31:16.415 "write": true, 00:31:16.415 "unmap": true, 00:31:16.415 "flush": false, 00:31:16.415 "reset": true, 00:31:16.415 "nvme_admin": false, 00:31:16.415 "nvme_io": false, 00:31:16.415 "nvme_io_md": false, 00:31:16.415 "write_zeroes": true, 00:31:16.415 "zcopy": false, 00:31:16.415 "get_zone_info": false, 00:31:16.415 "zone_management": false, 00:31:16.415 "zone_append": false, 00:31:16.415 "compare": false, 00:31:16.415 "compare_and_write": false, 00:31:16.415 "abort": false, 00:31:16.415 "seek_hole": true, 00:31:16.415 "seek_data": true, 00:31:16.415 "copy": false, 00:31:16.415 "nvme_iov_md": false 00:31:16.415 }, 00:31:16.415 "driver_specific": { 00:31:16.415 "lvol": { 00:31:16.415 "lvol_store_uuid": "8627b79b-036b-4011-8234-993d4ad9ef5f", 00:31:16.415 "base_bdev": "nvme0n1", 00:31:16.415 "thin_provision": true, 00:31:16.415 "num_allocated_clusters": 0, 00:31:16.415 "snapshot": false, 00:31:16.415 "clone": false, 00:31:16.415 "esnap_clone": false 00:31:16.415 } 00:31:16.415 } 00:31:16.415 } 00:31:16.415 ]' 00:31:16.415 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bbb34f7d-acb7-4315-af36-f75e52245217 
--l2p_dram_limit 10' 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:16.672 13:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bbb34f7d-acb7-4315-af36-f75e52245217 --l2p_dram_limit 10 -c nvc0n1p0 00:31:16.929 [2024-12-06 13:26:03.737374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.929 [2024-12-06 13:26:03.737438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:16.929 [2024-12-06 13:26:03.737463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:16.929 [2024-12-06 13:26:03.737476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.737569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.737587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:16.930 [2024-12-06 13:26:03.737603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:16.930 [2024-12-06 13:26:03.737615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.737656] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:16.930 [2024-12-06 13:26:03.738708] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:16.930 [2024-12-06 13:26:03.738751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.738765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:16.930 [2024-12-06 13:26:03.738781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:31:16.930 [2024-12-06 13:26:03.738793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.739010] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 104883f5-0a7e-4c98-bc4c-b46162b2b89d 00:31:16.930 [2024-12-06 13:26:03.740839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.740899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:16.930 [2024-12-06 13:26:03.740916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:16.930 [2024-12-06 13:26:03.740930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.750918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.751002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:16.930 [2024-12-06 13:26:03.751019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.920 ms 00:31:16.930 [2024-12-06 13:26:03.751035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.751179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.751204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:16.930 [2024-12-06 13:26:03.751218] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:31:16.930 [2024-12-06 13:26:03.751238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.751356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.751401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:16.930 [2024-12-06 13:26:03.751419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:16.930 [2024-12-06 13:26:03.751434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.751469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:16.930 [2024-12-06 13:26:03.756720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.756775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:16.930 [2024-12-06 13:26:03.756812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:31:16.930 [2024-12-06 13:26:03.756825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.756874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.756889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:16.930 [2024-12-06 13:26:03.756904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:16.930 [2024-12-06 13:26:03.756915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.756965] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:16.930 [2024-12-06 13:26:03.757128] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:16.930 [2024-12-06 13:26:03.757182] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:16.930 [2024-12-06 13:26:03.757201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:16.930 [2024-12-06 13:26:03.757219] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757233] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757248] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:16.930 [2024-12-06 13:26:03.757260] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:16.930 [2024-12-06 13:26:03.757279] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:16.930 [2024-12-06 13:26:03.757290] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:16.930 [2024-12-06 13:26:03.757314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.757337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:16.930 [2024-12-06 13:26:03.757353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:31:16.930 [2024-12-06 13:26:03.757365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.757466] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.930 [2024-12-06 13:26:03.757486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:16.930 [2024-12-06 13:26:03.757501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:16.930 [2024-12-06 13:26:03.757513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.930 [2024-12-06 13:26:03.757631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:16.930 [2024-12-06 13:26:03.757654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:16.930 [2024-12-06 13:26:03.757671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:16.930 [2024-12-06 13:26:03.757708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:16.930 [2024-12-06 13:26:03.757747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:16.930 [2024-12-06 13:26:03.757772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:16.930 [2024-12-06 13:26:03.757783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:16.930 [2024-12-06 13:26:03.757798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:16.930 [2024-12-06 13:26:03.757809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:16.930 [2024-12-06 13:26:03.757823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:16.930 [2024-12-06 13:26:03.757833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:16.930 [2024-12-06 13:26:03.757860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:16.930 [2024-12-06 13:26:03.757897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:16.930 [2024-12-06 13:26:03.757931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:16.930 [2024-12-06 13:26:03.757968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:16.930 [2024-12-06 13:26:03.757978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:16.930 [2024-12-06 13:26:03.757991] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:16.930 [2024-12-06 13:26:03.758002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:16.930 [2024-12-06 13:26:03.758014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:16.930 [2024-12-06 13:26:03.758025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:16.930 [2024-12-06 13:26:03.758041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:16.930 [2024-12-06 13:26:03.758052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:16.930 [2024-12-06 13:26:03.758065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:16.930 [2024-12-06 13:26:03.758076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:16.930 [2024-12-06 13:26:03.758090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:16.930 [2024-12-06 13:26:03.758100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:16.930 [2024-12-06 13:26:03.758115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:16.930 [2024-12-06 13:26:03.758139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:16.930 [2024-12-06 13:26:03.758155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:16.930 [2024-12-06 13:26:03.758167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:16.930 [2024-12-06 13:26:03.758181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:16.930 [2024-12-06 13:26:03.758191] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:16.930 [2024-12-06 13:26:03.758207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:16.930 [2024-12-06 13:26:03.758218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:16.930 [2024-12-06 13:26:03.758232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:16.930 [2024-12-06 13:26:03.758244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:16.930 [2024-12-06 13:26:03.758260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:16.930 [2024-12-06 13:26:03.758271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:16.931 [2024-12-06 13:26:03.758295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:16.931 [2024-12-06 13:26:03.758307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:16.931 [2024-12-06 13:26:03.758321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:16.931 [2024-12-06 13:26:03.758334] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:16.931 [2024-12-06 13:26:03.758355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:16.931 [2024-12-06 13:26:03.758368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:16.931 [2024-12-06 13:26:03.758382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:16.931 [2024-12-06 13:26:03.758394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:16.931 [2024-12-06 13:26:03.758408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:16.931 [2024-12-06 13:26:03.758420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:16.931 [2024-12-06 13:26:03.758434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:16.931 [2024-12-06 13:26:03.758450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:16.931 [2024-12-06 13:26:03.758464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:16.931 [2024-12-06 13:26:03.758476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:16.931 [2024-12-06 13:26:03.758494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:16.931 [2024-12-06 13:26:03.758506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:16.931 [2024-12-06 13:26:03.758520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:16.931 [2024-12-06 13:26:03.758531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:16.931 [2024-12-06 13:26:03.758546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:16.931 [2024-12-06 13:26:03.758557] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:16.931 [2024-12-06 13:26:03.758572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:16.931 [2024-12-06 13:26:03.758585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:16.931 [2024-12-06 13:26:03.758602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:16.931 [2024-12-06 13:26:03.758615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:16.931 [2024-12-06 13:26:03.758629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:16.931 [2024-12-06 13:26:03.758642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:16.931 [2024-12-06 13:26:03.758656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:16.931 [2024-12-06 13:26:03.758668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:31:16.931 [2024-12-06 13:26:03.758683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:16.931 [2024-12-06 13:26:03.758740] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:16.931 [2024-12-06 13:26:03.758768] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:19.461 [2024-12-06 13:26:06.355098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.355268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:19.461 [2024-12-06 13:26:06.355295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2596.370 ms 00:31:19.461 [2024-12-06 13:26:06.355316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.461 [2024-12-06 13:26:06.396445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.396515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:19.461 [2024-12-06 13:26:06.396541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.817 ms 00:31:19.461 [2024-12-06 13:26:06.396576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.461 [2024-12-06 13:26:06.396784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.396828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:19.461 [2024-12-06 13:26:06.396849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:19.461 [2024-12-06 13:26:06.396876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.461 [2024-12-06 13:26:06.443445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.443544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.461 [2024-12-06 13:26:06.443585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.455 ms 00:31:19.461 [2024-12-06 13:26:06.443605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.461 [2024-12-06 13:26:06.443664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.443697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.461 [2024-12-06 13:26:06.443714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:19.461 [2024-12-06 13:26:06.443748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.461 [2024-12-06 13:26:06.444437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.444475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.461 [2024-12-06 13:26:06.444529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:31:19.461 [2024-12-06 13:26:06.444547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.461 [2024-12-06 13:26:06.444723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.444755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.461 [2024-12-06 13:26:06.444775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:31:19.461 [2024-12-06 13:26:06.444796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.461 [2024-12-06 13:26:06.466411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.461 [2024-12-06 13:26:06.466475] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.461 [2024-12-06 13:26:06.466497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.581 ms 00:31:19.461 [2024-12-06 13:26:06.466516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.719 [2024-12-06 13:26:06.495120] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:19.719 [2024-12-06 13:26:06.499642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.719 [2024-12-06 13:26:06.499711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:19.719 [2024-12-06 13:26:06.499754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.985 ms 00:31:19.719 [2024-12-06 13:26:06.499769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.719 [2024-12-06 13:26:06.576438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.719 [2024-12-06 13:26:06.576557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:19.719 [2024-12-06 13:26:06.576605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.618 ms 00:31:19.719 [2024-12-06 13:26:06.576623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.719 [2024-12-06 13:26:06.576899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.719 [2024-12-06 13:26:06.576927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:19.719 [2024-12-06 13:26:06.576952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:31:19.719 [2024-12-06 13:26:06.576968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.719 [2024-12-06 13:26:06.608412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.720 [2024-12-06 13:26:06.608481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:19.720 [2024-12-06 13:26:06.608507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.363 ms 00:31:19.720 [2024-12-06 13:26:06.608523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.720 [2024-12-06 13:26:06.638228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.720 [2024-12-06 13:26:06.638318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:19.720 [2024-12-06 13:26:06.638363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.641 ms 00:31:19.720 [2024-12-06 13:26:06.638378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.720 [2024-12-06 13:26:06.639297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.720 [2024-12-06 13:26:06.639351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.720 [2024-12-06 13:26:06.639374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:31:19.720 [2024-12-06 13:26:06.639393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.720 [2024-12-06 13:26:06.725740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.720 [2024-12-06 13:26:06.725837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:19.720 [2024-12-06 13:26:06.725886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.269 ms 00:31:19.720 [2024-12-06 13:26:06.725902] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.979 [2024-12-06 13:26:06.758552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.979 [2024-12-06 13:26:06.758606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:19.979 [2024-12-06 13:26:06.758633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.522 ms 00:31:19.979 [2024-12-06 13:26:06.758650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.979 [2024-12-06 13:26:06.788371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.979 [2024-12-06 13:26:06.788449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:19.979 [2024-12-06 13:26:06.788490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.659 ms 00:31:19.979 [2024-12-06 13:26:06.788505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.979 [2024-12-06 13:26:06.818204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.979 [2024-12-06 13:26:06.818269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:19.979 [2024-12-06 13:26:06.818337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.623 ms 00:31:19.979 [2024-12-06 13:26:06.818354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.979 [2024-12-06 13:26:06.818420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.979 [2024-12-06 13:26:06.818443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:19.979 [2024-12-06 13:26:06.818467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:19.979 [2024-12-06 13:26:06.818482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.979 [2024-12-06 13:26:06.818667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.979 [2024-12-06 13:26:06.818706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:19.979 [2024-12-06 13:26:06.818728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:19.979 [2024-12-06 13:26:06.818742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.979 [2024-12-06 13:26:06.820183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3082.244 ms, result 0 00:31:19.979 { 00:31:19.979 "name": "ftl0", 00:31:19.979 "uuid": "104883f5-0a7e-4c98-bc4c-b46162b2b89d" 00:31:19.979 } 00:31:19.979 13:26:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:31:19.979 13:26:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:20.236 13:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:31:20.236 13:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:31:20.236 13:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:31:20.494 /dev/nbd0 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:31:20.494 1+0 records in 00:31:20.494 1+0 records out 00:31:20.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237844 s, 17.2 MB/s 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:31:20.494 13:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:31:20.751 [2024-12-06 13:26:07.583588] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:31:20.751 [2024-12-06 13:26:07.583748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81429 ] 00:31:20.751 [2024-12-06 13:26:07.758324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.010 [2024-12-06 13:26:07.883826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.390  [2024-12-06T13:26:10.340Z] Copying: 167/1024 [MB] (167 MBps) [2024-12-06T13:26:11.275Z] Copying: 336/1024 [MB] (168 MBps) [2024-12-06T13:26:12.247Z] Copying: 494/1024 [MB] (157 MBps) [2024-12-06T13:26:13.624Z] Copying: 656/1024 [MB] (162 MBps) [2024-12-06T13:26:14.558Z] Copying: 822/1024 [MB] (166 MBps) [2024-12-06T13:26:14.558Z] Copying: 981/1024 [MB] (159 MBps) [2024-12-06T13:26:15.933Z] Copying: 1024/1024 [MB] (average 163 MBps) 00:31:28.917 00:31:28.917 13:26:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:30.817 13:26:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:31:30.817 [2024-12-06 13:26:17.808020] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
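The write phase traced above reduces to three commands from ftl/dirty_shutdown.sh (lines @75-@77). A minimal by-hand sketch, reusing the exact binaries, paths, and sizes from this run - 262144 blocks of 4096 bytes, i.e. a 1 GiB payload; only the line breaks and comments are added here:

    # generate 1 GiB of random payload with spdk_dd pinned to core 1 (-m 0x2)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
    # reference checksum of the payload, recorded before the shutdown
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # replay the payload onto the FTL bdev exposed as /dev/nbd0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct

All three commands and their flags are taken verbatim from the @75/@76/@77 trace lines above.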
00:31:30.817 [2024-12-06 13:26:17.808247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81530 ] 00:31:31.076 [2024-12-06 13:26:17.997172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.334 [2024-12-06 13:26:18.148233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.713  [2024-12-06T13:26:20.665Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-06T13:26:21.604Z] Copying: 25/1024 [MB] (12 MBps) [2024-12-06T13:26:22.539Z] Copying: 38/1024 [MB] (13 MBps) [2024-12-06T13:26:23.917Z] Copying: 50/1024 [MB] (11 MBps) [2024-12-06T13:26:24.482Z] Copying: 62/1024 [MB] (12 MBps) [2024-12-06T13:26:25.853Z] Copying: 75/1024 [MB] (12 MBps) [2024-12-06T13:26:26.786Z] Copying: 89/1024 [MB] (13 MBps) [2024-12-06T13:26:27.719Z] Copying: 104/1024 [MB] (15 MBps) [2024-12-06T13:26:28.654Z] Copying: 117/1024 [MB] (13 MBps) [2024-12-06T13:26:29.588Z] Copying: 131/1024 [MB] (13 MBps) [2024-12-06T13:26:30.522Z] Copying: 144/1024 [MB] (13 MBps) [2024-12-06T13:26:31.894Z] Copying: 158/1024 [MB] (13 MBps) [2024-12-06T13:26:32.830Z] Copying: 172/1024 [MB] (14 MBps) [2024-12-06T13:26:33.764Z] Copying: 187/1024 [MB] (14 MBps) [2024-12-06T13:26:34.699Z] Copying: 202/1024 [MB] (14 MBps) [2024-12-06T13:26:35.635Z] Copying: 216/1024 [MB] (14 MBps) [2024-12-06T13:26:36.571Z] Copying: 229/1024 [MB] (12 MBps) [2024-12-06T13:26:37.504Z] Copying: 241/1024 [MB] (12 MBps) [2024-12-06T13:26:38.880Z] Copying: 254/1024 [MB] (12 MBps) [2024-12-06T13:26:39.816Z] Copying: 267/1024 [MB] (13 MBps) [2024-12-06T13:26:40.752Z] Copying: 282/1024 [MB] (14 MBps) [2024-12-06T13:26:41.691Z] Copying: 295/1024 [MB] (13 MBps) [2024-12-06T13:26:42.629Z] Copying: 307/1024 [MB] (12 MBps) [2024-12-06T13:26:43.567Z] Copying: 321/1024 [MB] (13 MBps) [2024-12-06T13:26:44.503Z] Copying: 333/1024 [MB] (12 MBps) [2024-12-06T13:26:45.877Z] Copying: 346/1024 [MB] (12 MBps) [2024-12-06T13:26:46.839Z] Copying: 360/1024 [MB] (13 MBps) [2024-12-06T13:26:47.771Z] Copying: 373/1024 [MB] (13 MBps) [2024-12-06T13:26:48.706Z] Copying: 386/1024 [MB] (13 MBps) [2024-12-06T13:26:49.648Z] Copying: 399/1024 [MB] (12 MBps) [2024-12-06T13:26:50.580Z] Copying: 412/1024 [MB] (12 MBps) [2024-12-06T13:26:51.518Z] Copying: 425/1024 [MB] (12 MBps) [2024-12-06T13:26:52.892Z] Copying: 437/1024 [MB] (12 MBps) [2024-12-06T13:26:53.826Z] Copying: 450/1024 [MB] (12 MBps) [2024-12-06T13:26:54.759Z] Copying: 463/1024 [MB] (12 MBps) [2024-12-06T13:26:55.692Z] Copying: 476/1024 [MB] (12 MBps) [2024-12-06T13:26:56.625Z] Copying: 489/1024 [MB] (12 MBps) [2024-12-06T13:26:57.590Z] Copying: 502/1024 [MB] (13 MBps) [2024-12-06T13:26:58.526Z] Copying: 515/1024 [MB] (13 MBps) [2024-12-06T13:26:59.904Z] Copying: 530/1024 [MB] (15 MBps) [2024-12-06T13:27:00.838Z] Copying: 545/1024 [MB] (14 MBps) [2024-12-06T13:27:01.781Z] Copying: 558/1024 [MB] (13 MBps) [2024-12-06T13:27:02.716Z] Copying: 571/1024 [MB] (12 MBps) [2024-12-06T13:27:03.652Z] Copying: 585/1024 [MB] (13 MBps) [2024-12-06T13:27:04.589Z] Copying: 599/1024 [MB] (14 MBps) [2024-12-06T13:27:05.526Z] Copying: 613/1024 [MB] (13 MBps) [2024-12-06T13:27:06.897Z] Copying: 627/1024 [MB] (13 MBps) [2024-12-06T13:27:07.830Z] Copying: 639/1024 [MB] (12 MBps) [2024-12-06T13:27:08.762Z] Copying: 653/1024 [MB] (13 MBps) [2024-12-06T13:27:09.695Z] Copying: 667/1024 [MB] (13 MBps) [2024-12-06T13:27:10.631Z] 
Copying: 680/1024 [MB] (13 MBps) [2024-12-06T13:27:11.566Z] Copying: 694/1024 [MB] (13 MBps) [2024-12-06T13:27:12.498Z] Copying: 706/1024 [MB] (12 MBps) [2024-12-06T13:27:13.877Z] Copying: 719/1024 [MB] (13 MBps) [2024-12-06T13:27:14.813Z] Copying: 732/1024 [MB] (12 MBps) [2024-12-06T13:27:15.755Z] Copying: 746/1024 [MB] (13 MBps) [2024-12-06T13:27:16.691Z] Copying: 761/1024 [MB] (14 MBps) [2024-12-06T13:27:17.645Z] Copying: 775/1024 [MB] (14 MBps) [2024-12-06T13:27:18.580Z] Copying: 789/1024 [MB] (14 MBps) [2024-12-06T13:27:19.517Z] Copying: 804/1024 [MB] (14 MBps) [2024-12-06T13:27:20.892Z] Copying: 818/1024 [MB] (14 MBps) [2024-12-06T13:27:21.826Z] Copying: 833/1024 [MB] (14 MBps) [2024-12-06T13:27:22.763Z] Copying: 847/1024 [MB] (13 MBps) [2024-12-06T13:27:23.698Z] Copying: 860/1024 [MB] (13 MBps) [2024-12-06T13:27:24.632Z] Copying: 873/1024 [MB] (13 MBps) [2024-12-06T13:27:25.565Z] Copying: 888/1024 [MB] (14 MBps) [2024-12-06T13:27:26.498Z] Copying: 902/1024 [MB] (14 MBps) [2024-12-06T13:27:27.874Z] Copying: 918/1024 [MB] (15 MBps) [2024-12-06T13:27:28.811Z] Copying: 934/1024 [MB] (15 MBps) [2024-12-06T13:27:29.748Z] Copying: 949/1024 [MB] (15 MBps) [2024-12-06T13:27:30.683Z] Copying: 965/1024 [MB] (15 MBps) [2024-12-06T13:27:31.618Z] Copying: 979/1024 [MB] (14 MBps) [2024-12-06T13:27:32.552Z] Copying: 995/1024 [MB] (15 MBps) [2024-12-06T13:27:33.487Z] Copying: 1011/1024 [MB] (15 MBps) [2024-12-06T13:27:34.423Z] Copying: 1024/1024 [MB] (average 13 MBps) 00:32:47.407 00:32:47.407 13:27:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:32:47.407 13:27:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:32:47.665 13:27:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:47.924 [2024-12-06 13:27:34.898086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.924 [2024-12-06 13:27:34.898173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:47.924 [2024-12-06 13:27:34.898198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:47.924 [2024-12-06 13:27:34.898214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.924 [2024-12-06 13:27:34.898256] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:47.924 [2024-12-06 13:27:34.901969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.924 [2024-12-06 13:27:34.902011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:47.924 [2024-12-06 13:27:34.902031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:32:47.924 [2024-12-06 13:27:34.902044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.924 [2024-12-06 13:27:34.904045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.924 [2024-12-06 13:27:34.904088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:47.924 [2024-12-06 13:27:34.904109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.959 ms 00:32:47.925 [2024-12-06 13:27:34.904122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.925 [2024-12-06 13:27:34.921421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.925 [2024-12-06 13:27:34.921471] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:47.925 [2024-12-06 13:27:34.921500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.245 ms 00:32:47.925 [2024-12-06 13:27:34.921513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.925 [2024-12-06 13:27:34.928051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.925 [2024-12-06 13:27:34.928095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:47.925 [2024-12-06 13:27:34.928113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.487 ms 00:32:47.925 [2024-12-06 13:27:34.928135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:34.959680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.185 [2024-12-06 13:27:34.959727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:48.185 [2024-12-06 13:27:34.959748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.433 ms 00:32:48.185 [2024-12-06 13:27:34.959761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:34.978670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.185 [2024-12-06 13:27:34.978720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:48.185 [2024-12-06 13:27:34.978746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.849 ms 00:32:48.185 [2024-12-06 13:27:34.978759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:34.978954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.185 [2024-12-06 13:27:34.978977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:48.185 [2024-12-06 13:27:34.978994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:32:48.185 [2024-12-06 13:27:34.979005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:35.009837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.185 [2024-12-06 13:27:35.009882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:48.185 [2024-12-06 13:27:35.009902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.799 ms 00:32:48.185 [2024-12-06 13:27:35.009914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:35.040151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.185 [2024-12-06 13:27:35.040208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:48.185 [2024-12-06 13:27:35.040231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.181 ms 00:32:48.185 [2024-12-06 13:27:35.040243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:35.070234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.185 [2024-12-06 13:27:35.070282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:48.185 [2024-12-06 13:27:35.070311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.931 ms 00:32:48.185 [2024-12-06 13:27:35.070324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:35.100247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:32:48.185 [2024-12-06 13:27:35.100299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:48.185 [2024-12-06 13:27:35.100321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.795 ms 00:32:48.185 [2024-12-06 13:27:35.100340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.185 [2024-12-06 13:27:35.100393] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:48.185 [2024-12-06 13:27:35.100417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.100993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101091] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:48.185 [2024-12-06 13:27:35.101301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 
13:27:35.101484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:32:48.186 [2024-12-06 13:27:35.101834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:48.186 [2024-12-06 13:27:35.101906] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:48.186 [2024-12-06 13:27:35.101921] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 104883f5-0a7e-4c98-bc4c-b46162b2b89d 00:32:48.186 [2024-12-06 13:27:35.101933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:48.186 [2024-12-06 13:27:35.101949] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:48.186 [2024-12-06 13:27:35.101963] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:48.186 [2024-12-06 13:27:35.101977] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:48.186 [2024-12-06 13:27:35.101988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:48.186 [2024-12-06 13:27:35.102002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:48.186 [2024-12-06 13:27:35.102014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:48.186 [2024-12-06 13:27:35.102026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:48.186 [2024-12-06 13:27:35.102037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:48.186 [2024-12-06 13:27:35.102053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.186 [2024-12-06 13:27:35.102065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:48.186 [2024-12-06 13:27:35.102080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:32:48.186 [2024-12-06 13:27:35.102092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.186 [2024-12-06 13:27:35.119299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.186 [2024-12-06 13:27:35.119359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:48.186 [2024-12-06 13:27:35.119380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.112 ms 00:32:48.186 [2024-12-06 13:27:35.119392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.186 [2024-12-06 13:27:35.119875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.186 [2024-12-06 13:27:35.119899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:48.186 [2024-12-06 13:27:35.119917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:32:48.186 [2024-12-06 13:27:35.119929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.186 [2024-12-06 13:27:35.176453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.186 [2024-12-06 13:27:35.176513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:48.186 [2024-12-06 13:27:35.176534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.186 
[2024-12-06 13:27:35.176547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.186 [2024-12-06 13:27:35.176632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.186 [2024-12-06 13:27:35.176648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:48.186 [2024-12-06 13:27:35.176663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.186 [2024-12-06 13:27:35.176675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.186 [2024-12-06 13:27:35.176818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.186 [2024-12-06 13:27:35.176851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:48.186 [2024-12-06 13:27:35.176867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.186 [2024-12-06 13:27:35.176879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.186 [2024-12-06 13:27:35.176914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.186 [2024-12-06 13:27:35.176929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:48.186 [2024-12-06 13:27:35.176944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.186 [2024-12-06 13:27:35.176955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.446 [2024-12-06 13:27:35.284357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.446 [2024-12-06 13:27:35.284436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:48.446 [2024-12-06 13:27:35.284459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.446 [2024-12-06 13:27:35.284472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.446 [2024-12-06 13:27:35.370558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.446 [2024-12-06 13:27:35.370626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:48.446 [2024-12-06 13:27:35.370653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.446 [2024-12-06 13:27:35.370666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.446 [2024-12-06 13:27:35.370809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.446 [2024-12-06 13:27:35.370829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:48.446 [2024-12-06 13:27:35.370850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.446 [2024-12-06 13:27:35.370861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.446 [2024-12-06 13:27:35.370942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.446 [2024-12-06 13:27:35.370962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:48.446 [2024-12-06 13:27:35.370978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.446 [2024-12-06 13:27:35.370990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.446 [2024-12-06 13:27:35.371149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.446 [2024-12-06 13:27:35.371178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:48.446 [2024-12-06 13:27:35.371197] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.446 [2024-12-06 13:27:35.371212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.447 [2024-12-06 13:27:35.371276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.447 [2024-12-06 13:27:35.371294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:48.447 [2024-12-06 13:27:35.371310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.447 [2024-12-06 13:27:35.371322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.447 [2024-12-06 13:27:35.371376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.447 [2024-12-06 13:27:35.371392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:48.447 [2024-12-06 13:27:35.371407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.447 [2024-12-06 13:27:35.371421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.447 [2024-12-06 13:27:35.371486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:48.447 [2024-12-06 13:27:35.371519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:48.447 [2024-12-06 13:27:35.371536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:48.447 [2024-12-06 13:27:35.371548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.447 [2024-12-06 13:27:35.371751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 473.600 ms, result 0 00:32:48.447 true 00:32:48.447 13:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81281 00:32:48.447 13:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81281 00:32:48.447 13:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:32:48.708 [2024-12-06 13:27:35.494330] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:32:48.708 [2024-12-06 13:27:35.494492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82292 ] 00:32:48.708 [2024-12-06 13:27:35.672050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.966 [2024-12-06 13:27:35.802925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.343  [2024-12-06T13:27:38.293Z] Copying: 168/1024 [MB] (168 MBps) [2024-12-06T13:27:39.229Z] Copying: 335/1024 [MB] (166 MBps) [2024-12-06T13:27:40.162Z] Copying: 499/1024 [MB] (164 MBps) [2024-12-06T13:27:41.538Z] Copying: 666/1024 [MB] (167 MBps) [2024-12-06T13:27:42.515Z] Copying: 845/1024 [MB] (178 MBps) [2024-12-06T13:27:42.515Z] Copying: 1019/1024 [MB] (174 MBps) [2024-12-06T13:27:43.486Z] Copying: 1024/1024 [MB] (average 169 MBps) 00:32:56.470 00:32:56.470 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81281 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:32:56.470 13:27:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:56.470 [2024-12-06 13:27:43.330150] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:32:56.470 [2024-12-06 13:27:43.330420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82367 ] 00:32:56.729 [2024-12-06 13:27:43.515227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.729 [2024-12-06 13:27:43.649469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.296 [2024-12-06 13:27:44.021114] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:57.296 [2024-12-06 13:27:44.021259] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:57.296 [2024-12-06 13:27:44.088793] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:57.296 [2024-12-06 13:27:44.089344] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:57.296 [2024-12-06 13:27:44.089709] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:57.555 [2024-12-06 13:27:44.384458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.384555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:57.555 [2024-12-06 13:27:44.384593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:57.555 [2024-12-06 13:27:44.384612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.384682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.384702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:57.555 [2024-12-06 13:27:44.384717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:57.555 [2024-12-06 13:27:44.384729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.384764] 
mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:57.555 [2024-12-06 13:27:44.385709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:57.555 [2024-12-06 13:27:44.385761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.385776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:57.555 [2024-12-06 13:27:44.385790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:32:57.555 [2024-12-06 13:27:44.385802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.387930] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:57.555 [2024-12-06 13:27:44.404852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.404938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:57.555 [2024-12-06 13:27:44.404975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.923 ms 00:32:57.555 [2024-12-06 13:27:44.404988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.405063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.405084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:57.555 [2024-12-06 13:27:44.405097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:57.555 [2024-12-06 13:27:44.405110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.414975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.415037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:57.555 [2024-12-06 13:27:44.415071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.739 ms 00:32:57.555 [2024-12-06 13:27:44.415085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.415200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.415223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:57.555 [2024-12-06 13:27:44.415238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:32:57.555 [2024-12-06 13:27:44.415250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.415352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.415384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:57.555 [2024-12-06 13:27:44.415398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:57.555 [2024-12-06 13:27:44.415411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.415448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:57.555 [2024-12-06 13:27:44.420750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.420792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:57.555 [2024-12-06 13:27:44.420809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 5.311 ms 00:32:57.555 [2024-12-06 13:27:44.420823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.420868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.555 [2024-12-06 13:27:44.420897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:57.555 [2024-12-06 13:27:44.420911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:57.555 [2024-12-06 13:27:44.420924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.555 [2024-12-06 13:27:44.420981] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:57.555 [2024-12-06 13:27:44.421017] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:57.555 [2024-12-06 13:27:44.421066] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:57.555 [2024-12-06 13:27:44.421088] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:57.555 [2024-12-06 13:27:44.421227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:57.555 [2024-12-06 13:27:44.421261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:57.555 [2024-12-06 13:27:44.421288] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:57.555 [2024-12-06 13:27:44.421310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:57.555 [2024-12-06 13:27:44.421325] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:57.555 [2024-12-06 13:27:44.421339] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:57.556 [2024-12-06 13:27:44.421351] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:57.556 [2024-12-06 13:27:44.421375] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:57.556 [2024-12-06 13:27:44.421387] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:57.556 [2024-12-06 13:27:44.421401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.556 [2024-12-06 13:27:44.421414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:57.556 [2024-12-06 13:27:44.421427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:32:57.556 [2024-12-06 13:27:44.421439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.556 [2024-12-06 13:27:44.421539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.556 [2024-12-06 13:27:44.421561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:57.556 [2024-12-06 13:27:44.421574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:57.556 [2024-12-06 13:27:44.421587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.556 [2024-12-06 13:27:44.421709] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:57.556 [2024-12-06 13:27:44.421733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:57.556 [2024-12-06 
13:27:44.421748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:57.556 [2024-12-06 13:27:44.421762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.556 [2024-12-06 13:27:44.421775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:57.556 [2024-12-06 13:27:44.421788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:57.556 [2024-12-06 13:27:44.421800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:57.556 [2024-12-06 13:27:44.421815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:57.556 [2024-12-06 13:27:44.421828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:57.556 [2024-12-06 13:27:44.421854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:57.556 [2024-12-06 13:27:44.421867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:57.556 [2024-12-06 13:27:44.421890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:57.556 [2024-12-06 13:27:44.421912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:57.556 [2024-12-06 13:27:44.421924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:57.556 [2024-12-06 13:27:44.421937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:57.556 [2024-12-06 13:27:44.421949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.556 [2024-12-06 13:27:44.421961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:57.556 [2024-12-06 13:27:44.421973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:57.556 [2024-12-06 13:27:44.421985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.556 [2024-12-06 13:27:44.421997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:57.556 [2024-12-06 13:27:44.422009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.556 [2024-12-06 13:27:44.422033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:57.556 [2024-12-06 13:27:44.422045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.556 [2024-12-06 13:27:44.422069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:57.556 [2024-12-06 13:27:44.422081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.556 [2024-12-06 13:27:44.422105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:57.556 [2024-12-06 13:27:44.422117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.556 [2024-12-06 13:27:44.422172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:57.556 [2024-12-06 13:27:44.422184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 
00:32:57.556 [2024-12-06 13:27:44.422208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:57.556 [2024-12-06 13:27:44.422220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:57.556 [2024-12-06 13:27:44.422232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:57.556 [2024-12-06 13:27:44.422244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:57.556 [2024-12-06 13:27:44.422257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:57.556 [2024-12-06 13:27:44.422277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:57.556 [2024-12-06 13:27:44.422325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:57.556 [2024-12-06 13:27:44.422338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422351] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:57.556 [2024-12-06 13:27:44.422364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:57.556 [2024-12-06 13:27:44.422384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:57.556 [2024-12-06 13:27:44.422396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.556 [2024-12-06 13:27:44.422410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:57.556 [2024-12-06 13:27:44.422423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:57.556 [2024-12-06 13:27:44.422435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:57.556 [2024-12-06 13:27:44.422447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:57.556 [2024-12-06 13:27:44.422459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:57.556 [2024-12-06 13:27:44.422471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:57.556 [2024-12-06 13:27:44.422486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:57.556 [2024-12-06 13:27:44.422501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.556 [2024-12-06 13:27:44.422515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:57.556 [2024-12-06 13:27:44.422528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:57.556 [2024-12-06 13:27:44.422541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:57.556 [2024-12-06 13:27:44.422554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:57.556 [2024-12-06 13:27:44.422567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:57.556 [2024-12-06 13:27:44.422579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:57.556 [2024-12-06 
13:27:44.422591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:57.556 [2024-12-06 13:27:44.422604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:57.556 [2024-12-06 13:27:44.422616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:57.556 [2024-12-06 13:27:44.422629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:57.556 [2024-12-06 13:27:44.422652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:57.556 [2024-12-06 13:27:44.422670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:57.556 [2024-12-06 13:27:44.422682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:57.556 [2024-12-06 13:27:44.422695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:57.556 [2024-12-06 13:27:44.422707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:57.556 [2024-12-06 13:27:44.422722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.556 [2024-12-06 13:27:44.422737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:57.556 [2024-12-06 13:27:44.422750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:57.556 [2024-12-06 13:27:44.422763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:57.556 [2024-12-06 13:27:44.422776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:57.556 [2024-12-06 13:27:44.422798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.556 [2024-12-06 13:27:44.422811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:57.556 [2024-12-06 13:27:44.422824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:32:57.556 [2024-12-06 13:27:44.422837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.556 [2024-12-06 13:27:44.465142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.556 [2024-12-06 13:27:44.465242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:57.556 [2024-12-06 13:27:44.465282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.205 ms 00:32:57.556 [2024-12-06 13:27:44.465296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.556 [2024-12-06 13:27:44.465443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.556 [2024-12-06 13:27:44.465462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 
00:32:57.556 [2024-12-06 13:27:44.465478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:57.556 [2024-12-06 13:27:44.465491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.556 [2024-12-06 13:27:44.528496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.557 [2024-12-06 13:27:44.528586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:57.557 [2024-12-06 13:27:44.528615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.906 ms 00:32:57.557 [2024-12-06 13:27:44.528629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.557 [2024-12-06 13:27:44.528713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.557 [2024-12-06 13:27:44.528733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:57.557 [2024-12-06 13:27:44.528749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:57.557 [2024-12-06 13:27:44.528762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.557 [2024-12-06 13:27:44.529545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.557 [2024-12-06 13:27:44.529592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:57.557 [2024-12-06 13:27:44.529609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:32:57.557 [2024-12-06 13:27:44.529631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.557 [2024-12-06 13:27:44.529811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.557 [2024-12-06 13:27:44.529832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:57.557 [2024-12-06 13:27:44.529847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:32:57.557 [2024-12-06 13:27:44.529859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.557 [2024-12-06 13:27:44.550890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.557 [2024-12-06 13:27:44.550967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:57.557 [2024-12-06 13:27:44.550988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.979 ms 00:32:57.557 [2024-12-06 13:27:44.551001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.815 [2024-12-06 13:27:44.568947] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:57.815 [2024-12-06 13:27:44.569031] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:57.815 [2024-12-06 13:27:44.569071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.815 [2024-12-06 13:27:44.569086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:57.815 [2024-12-06 13:27:44.569102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.851 ms 00:32:57.815 [2024-12-06 13:27:44.569115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.815 [2024-12-06 13:27:44.599965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.815 [2024-12-06 13:27:44.600038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:57.816 [2024-12-06 13:27:44.600060] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 30.779 ms 00:32:57.816 [2024-12-06 13:27:44.600075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.616624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.616696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:57.816 [2024-12-06 13:27:44.616718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.458 ms 00:32:57.816 [2024-12-06 13:27:44.616732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.632896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.632968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:57.816 [2024-12-06 13:27:44.633001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.097 ms 00:32:57.816 [2024-12-06 13:27:44.633014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.634033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.634087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:57.816 [2024-12-06 13:27:44.634120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:32:57.816 [2024-12-06 13:27:44.634149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.716094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.716197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:57.816 [2024-12-06 13:27:44.716249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.885 ms 00:32:57.816 [2024-12-06 13:27:44.716275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.731323] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:57.816 [2024-12-06 13:27:44.736112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.736193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:57.816 [2024-12-06 13:27:44.736227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.726 ms 00:32:57.816 [2024-12-06 13:27:44.736259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.736434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.736468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:57.816 [2024-12-06 13:27:44.736491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:57.816 [2024-12-06 13:27:44.736505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.736633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.736661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:57.816 [2024-12-06 13:27:44.736677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:57.816 [2024-12-06 13:27:44.736690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.736735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:32:57.816 [2024-12-06 13:27:44.736752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:57.816 [2024-12-06 13:27:44.736766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:57.816 [2024-12-06 13:27:44.736791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.736839] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:57.816 [2024-12-06 13:27:44.736856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.736869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:57.816 [2024-12-06 13:27:44.736893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:57.816 [2024-12-06 13:27:44.736921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.770494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.770581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:57.816 [2024-12-06 13:27:44.770605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.538 ms 00:32:57.816 [2024-12-06 13:27:44.770620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.770760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.816 [2024-12-06 13:27:44.770782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:57.816 [2024-12-06 13:27:44.770805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:57.816 [2024-12-06 13:27:44.770827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.816 [2024-12-06 13:27:44.772422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 387.323 ms, result 0 00:32:59.189  [2024-12-06T13:27:47.140Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-06T13:27:48.074Z] Copying: 47/1024 [MB] (24 MBps) [2024-12-06T13:27:49.008Z] Copying: 70/1024 [MB] (22 MBps) [2024-12-06T13:27:49.940Z] Copying: 95/1024 [MB] (25 MBps) [2024-12-06T13:27:50.906Z] Copying: 120/1024 [MB] (25 MBps) [2024-12-06T13:27:51.842Z] Copying: 145/1024 [MB] (24 MBps) [2024-12-06T13:27:53.217Z] Copying: 170/1024 [MB] (25 MBps) [2024-12-06T13:27:53.786Z] Copying: 196/1024 [MB] (25 MBps) [2024-12-06T13:27:55.161Z] Copying: 221/1024 [MB] (25 MBps) [2024-12-06T13:27:56.094Z] Copying: 246/1024 [MB] (25 MBps) [2024-12-06T13:27:57.030Z] Copying: 272/1024 [MB] (25 MBps) [2024-12-06T13:27:57.966Z] Copying: 298/1024 [MB] (26 MBps) [2024-12-06T13:27:58.896Z] Copying: 323/1024 [MB] (25 MBps) [2024-12-06T13:27:59.828Z] Copying: 346/1024 [MB] (22 MBps) [2024-12-06T13:28:01.201Z] Copying: 370/1024 [MB] (24 MBps) [2024-12-06T13:28:02.133Z] Copying: 395/1024 [MB] (24 MBps) [2024-12-06T13:28:03.068Z] Copying: 419/1024 [MB] (24 MBps) [2024-12-06T13:28:04.001Z] Copying: 444/1024 [MB] (25 MBps) [2024-12-06T13:28:04.949Z] Copying: 469/1024 [MB] (24 MBps) [2024-12-06T13:28:05.882Z] Copying: 493/1024 [MB] (24 MBps) [2024-12-06T13:28:06.840Z] Copying: 517/1024 [MB] (24 MBps) [2024-12-06T13:28:08.216Z] Copying: 542/1024 [MB] (24 MBps) [2024-12-06T13:28:09.152Z] Copying: 566/1024 [MB] (24 MBps) [2024-12-06T13:28:10.086Z] Copying: 592/1024 [MB] (25 MBps) [2024-12-06T13:28:11.039Z] Copying: 617/1024 [MB] (25 MBps) [2024-12-06T13:28:11.973Z] 
Copying: 643/1024 [MB] (25 MBps) [2024-12-06T13:28:12.907Z] Copying: 666/1024 [MB] (23 MBps) [2024-12-06T13:28:13.841Z] Copying: 692/1024 [MB] (25 MBps) [2024-12-06T13:28:15.215Z] Copying: 716/1024 [MB] (24 MBps) [2024-12-06T13:28:16.147Z] Copying: 742/1024 [MB] (25 MBps) [2024-12-06T13:28:17.080Z] Copying: 767/1024 [MB] (25 MBps) [2024-12-06T13:28:18.015Z] Copying: 792/1024 [MB] (25 MBps) [2024-12-06T13:28:18.948Z] Copying: 819/1024 [MB] (26 MBps) [2024-12-06T13:28:19.877Z] Copying: 845/1024 [MB] (26 MBps) [2024-12-06T13:28:20.838Z] Copying: 871/1024 [MB] (25 MBps) [2024-12-06T13:28:22.211Z] Copying: 896/1024 [MB] (25 MBps) [2024-12-06T13:28:23.145Z] Copying: 922/1024 [MB] (26 MBps) [2024-12-06T13:28:24.080Z] Copying: 948/1024 [MB] (26 MBps) [2024-12-06T13:28:25.016Z] Copying: 975/1024 [MB] (26 MBps) [2024-12-06T13:28:25.947Z] Copying: 1002/1024 [MB] (27 MBps) [2024-12-06T13:28:26.881Z] Copying: 1023/1024 [MB] (20 MBps) [2024-12-06T13:28:26.881Z] Copying: 1048576/1048576 [kB] (796 kBps) [2024-12-06T13:28:26.881Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-06 13:28:26.803198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.865 [2024-12-06 13:28:26.803446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:39.865 [2024-12-06 13:28:26.803482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:39.865 [2024-12-06 13:28:26.803497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.865 [2024-12-06 13:28:26.806607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:39.865 [2024-12-06 13:28:26.811383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.865 [2024-12-06 13:28:26.811429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:39.865 [2024-12-06 13:28:26.811449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:33:39.865 [2024-12-06 13:28:26.811473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.865 [2024-12-06 13:28:26.824644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.865 [2024-12-06 13:28:26.824704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:39.865 [2024-12-06 13:28:26.824726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.925 ms 00:33:39.865 [2024-12-06 13:28:26.824740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.865 [2024-12-06 13:28:26.848057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.865 [2024-12-06 13:28:26.848148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:39.865 [2024-12-06 13:28:26.848172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.289 ms 00:33:39.865 [2024-12-06 13:28:26.848186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.865 [2024-12-06 13:28:26.855062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.865 [2024-12-06 13:28:26.855106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:39.865 [2024-12-06 13:28:26.855133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.818 ms 00:33:39.865 [2024-12-06 13:28:26.855149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.125 [2024-12-06 13:28:26.887640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:40.125 [2024-12-06 13:28:26.887704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:40.125 [2024-12-06 13:28:26.887726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.395 ms 00:33:40.125 [2024-12-06 13:28:26.887740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.125 [2024-12-06 13:28:26.912661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.125 [2024-12-06 13:28:26.912723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:40.125 [2024-12-06 13:28:26.912755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.865 ms 00:33:40.125 [2024-12-06 13:28:26.912769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.125 [2024-12-06 13:28:27.015770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.125 [2024-12-06 13:28:27.015859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:40.125 [2024-12-06 13:28:27.015908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.937 ms 00:33:40.125 [2024-12-06 13:28:27.015922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.125 [2024-12-06 13:28:27.050372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.125 [2024-12-06 13:28:27.050437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:40.125 [2024-12-06 13:28:27.050458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.421 ms 00:33:40.125 [2024-12-06 13:28:27.050490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.125 [2024-12-06 13:28:27.081963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.125 [2024-12-06 13:28:27.082023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:40.125 [2024-12-06 13:28:27.082044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.417 ms 00:33:40.125 [2024-12-06 13:28:27.082058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.125 [2024-12-06 13:28:27.113756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.125 [2024-12-06 13:28:27.113871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:40.125 [2024-12-06 13:28:27.113892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.646 ms 00:33:40.125 [2024-12-06 13:28:27.113906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.386 [2024-12-06 13:28:27.145640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.386 [2024-12-06 13:28:27.145705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:40.386 [2024-12-06 13:28:27.145725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.634 ms 00:33:40.386 [2024-12-06 13:28:27.145739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.386 [2024-12-06 13:28:27.145787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:40.386 [2024-12-06 13:28:27.145860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:33:40.386 [2024-12-06 13:28:27.145901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.145914] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.145944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.145958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.145971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.145988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146286] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:40.386 [2024-12-06 13:28:27.146613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 
[2024-12-06 13:28:27.146639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:33:40.387 [2024-12-06 13:28:27.146974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.146987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:40.387 [2024-12-06 13:28:27.147341] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:40.387 [2024-12-06 13:28:27.147354] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 104883f5-0a7e-4c98-bc4c-b46162b2b89d 
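The statistics dump that follows reports a write-amplification factor (WAF) next to the raw counters it is derived from, and the figures are self-consistent: WAF is total device writes divided by user-submitted writes, i.e. 128704 / 127744 ≈ 1.0075, matching the logged value. A minimal cross-check sketch (the line format here is assumed from the *NOTICE* message text in this transcript, not taken from any SPDK tool):

```python
# Cross-check the WAF reported by ftl_dev_dump_stats against its raw counters.
# The excerpt mirrors the *NOTICE* message text from this log.
import re

stats_excerpt = """
[FTL][ftl0] total writes: 128704
[FTL][ftl0] user writes: 127744
[FTL][ftl0] WAF: 1.0075
"""

def field(name: str, text: str) -> float:
    # Pull the numeric value out of a "name: value" log line.
    return float(re.search(rf"{name}:\s*([\d.]+)", text).group(1))

total = field("total writes", stats_excerpt)
user = field("user writes", stats_excerpt)
waf = total / user                       # WAF = total device writes / user writes
assert abs(waf - field("WAF", stats_excerpt)) < 5e-4
print(f"WAF = {total:.0f} / {user:.0f} = {waf:.4f}")   # -> 1.0075
```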
00:33:40.387 [2024-12-06 13:28:27.147398] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:33:40.387 [2024-12-06 13:28:27.147411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128704 00:33:40.387 [2024-12-06 13:28:27.147424] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:33:40.387 [2024-12-06 13:28:27.147438] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:33:40.387 [2024-12-06 13:28:27.147450] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:40.387 [2024-12-06 13:28:27.147463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:40.387 [2024-12-06 13:28:27.147475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:40.387 [2024-12-06 13:28:27.147487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:40.387 [2024-12-06 13:28:27.147497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:40.387 [2024-12-06 13:28:27.147510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.387 [2024-12-06 13:28:27.147523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:40.387 [2024-12-06 13:28:27.147536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:33:40.387 [2024-12-06 13:28:27.147550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.387 [2024-12-06 13:28:27.165383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.387 [2024-12-06 13:28:27.165446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:40.387 [2024-12-06 13:28:27.165466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.788 ms 00:33:40.387 [2024-12-06 13:28:27.165479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.387 [2024-12-06 13:28:27.165968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.387 [2024-12-06 13:28:27.165998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:40.387 [2024-12-06 13:28:27.166023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:33:40.387 [2024-12-06 13:28:27.166036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.387 [2024-12-06 13:28:27.213338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.387 [2024-12-06 13:28:27.213420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:40.387 [2024-12-06 13:28:27.213472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.387 [2024-12-06 13:28:27.213486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.387 [2024-12-06 13:28:27.213588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.387 [2024-12-06 13:28:27.213605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:40.387 [2024-12-06 13:28:27.213627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.387 [2024-12-06 13:28:27.213640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.387 [2024-12-06 13:28:27.213780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.387 [2024-12-06 13:28:27.213802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:40.387 [2024-12-06 
13:28:27.213817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.387 [2024-12-06 13:28:27.213830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.387 [2024-12-06 13:28:27.213855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.387 [2024-12-06 13:28:27.213871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:40.387 [2024-12-06 13:28:27.213886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.387 [2024-12-06 13:28:27.213898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.387 [2024-12-06 13:28:27.332048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.387 [2024-12-06 13:28:27.332149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:40.387 [2024-12-06 13:28:27.332171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.387 [2024-12-06 13:28:27.332185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.646 [2024-12-06 13:28:27.422406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.646 [2024-12-06 13:28:27.422487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:40.646 [2024-12-06 13:28:27.422509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.646 [2024-12-06 13:28:27.422534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.646 [2024-12-06 13:28:27.422647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.646 [2024-12-06 13:28:27.422679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:40.646 [2024-12-06 13:28:27.422694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.646 [2024-12-06 13:28:27.422707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.646 [2024-12-06 13:28:27.422760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.646 [2024-12-06 13:28:27.422778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:40.646 [2024-12-06 13:28:27.422792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.646 [2024-12-06 13:28:27.422805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.646 [2024-12-06 13:28:27.422941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.646 [2024-12-06 13:28:27.422961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:40.646 [2024-12-06 13:28:27.422976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.646 [2024-12-06 13:28:27.422990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.646 [2024-12-06 13:28:27.423044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.646 [2024-12-06 13:28:27.423084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:40.646 [2024-12-06 13:28:27.423100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.646 [2024-12-06 13:28:27.423113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.646 [2024-12-06 13:28:27.423194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.646 [2024-12-06 13:28:27.423214] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:40.646 [2024-12-06 13:28:27.423229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.647 [2024-12-06 13:28:27.423241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.647 [2024-12-06 13:28:27.423296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:40.647 [2024-12-06 13:28:27.423315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:40.647 [2024-12-06 13:28:27.423329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:40.647 [2024-12-06 13:28:27.423342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.647 [2024-12-06 13:28:27.423511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 623.646 ms, result 0 00:33:42.548 00:33:42.548 00:33:42.548 13:28:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:44.457 13:28:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:44.457 [2024-12-06 13:28:31.431267] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:33:44.457 [2024-12-06 13:28:31.431434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82833 ] 00:33:44.714 [2024-12-06 13:28:31.614391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.973 [2024-12-06 13:28:31.774176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.232 [2024-12-06 13:28:32.185407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:45.232 [2024-12-06 13:28:32.185497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:45.492 [2024-12-06 13:28:32.350082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.350204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:45.492 [2024-12-06 13:28:32.350227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:45.492 [2024-12-06 13:28:32.350241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.350325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.350349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:45.492 [2024-12-06 13:28:32.350364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:45.492 [2024-12-06 13:28:32.350375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.350408] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:45.492 [2024-12-06 13:28:32.351368] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:45.492 [2024-12-06 13:28:32.351410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.351425] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:45.492 [2024-12-06 13:28:32.351439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:33:45.492 [2024-12-06 13:28:32.351451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.353481] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:45.492 [2024-12-06 13:28:32.370562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.370618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:45.492 [2024-12-06 13:28:32.370638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.083 ms 00:33:45.492 [2024-12-06 13:28:32.370651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.370734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.370755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:45.492 [2024-12-06 13:28:32.370769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:45.492 [2024-12-06 13:28:32.370781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.379731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.379794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:45.492 [2024-12-06 13:28:32.379827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.821 ms 00:33:45.492 [2024-12-06 13:28:32.379847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.379950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.379969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:45.492 [2024-12-06 13:28:32.379983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:33:45.492 [2024-12-06 13:28:32.379995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.380058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.380076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:45.492 [2024-12-06 13:28:32.380089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:45.492 [2024-12-06 13:28:32.380102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.380161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:45.492 [2024-12-06 13:28:32.385239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.385283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:45.492 [2024-12-06 13:28:32.385306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.103 ms 00:33:45.492 [2024-12-06 13:28:32.385318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.385363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.385380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:45.492 [2024-12-06 13:28:32.385393] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:45.492 [2024-12-06 13:28:32.385404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.385478] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:45.492 [2024-12-06 13:28:32.385520] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:45.492 [2024-12-06 13:28:32.385564] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:45.492 [2024-12-06 13:28:32.385591] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:45.492 [2024-12-06 13:28:32.385704] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:45.492 [2024-12-06 13:28:32.385726] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:45.492 [2024-12-06 13:28:32.385743] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:45.492 [2024-12-06 13:28:32.385758] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:45.492 [2024-12-06 13:28:32.385772] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:45.492 [2024-12-06 13:28:32.385786] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:45.492 [2024-12-06 13:28:32.385798] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:45.492 [2024-12-06 13:28:32.385815] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:45.492 [2024-12-06 13:28:32.385827] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:45.492 [2024-12-06 13:28:32.385839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.385851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:45.492 [2024-12-06 13:28:32.385865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:33:45.492 [2024-12-06 13:28:32.385876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.385976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.492 [2024-12-06 13:28:32.385992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:45.492 [2024-12-06 13:28:32.386006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:45.492 [2024-12-06 13:28:32.386017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.492 [2024-12-06 13:28:32.386321] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:45.492 [2024-12-06 13:28:32.386358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:45.492 [2024-12-06 13:28:32.386373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:45.492 [2024-12-06 13:28:32.386386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.492 [2024-12-06 13:28:32.386399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:45.492 [2024-12-06 13:28:32.386410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 
MiB 00:33:45.492 [2024-12-06 13:28:32.386421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:45.492 [2024-12-06 13:28:32.386432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:45.492 [2024-12-06 13:28:32.386442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:45.492 [2024-12-06 13:28:32.386453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:45.492 [2024-12-06 13:28:32.386465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:45.492 [2024-12-06 13:28:32.386475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:45.492 [2024-12-06 13:28:32.386486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:45.492 [2024-12-06 13:28:32.386512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:45.492 [2024-12-06 13:28:32.386524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:45.492 [2024-12-06 13:28:32.386535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.492 [2024-12-06 13:28:32.386546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:45.492 [2024-12-06 13:28:32.386556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:45.493 [2024-12-06 13:28:32.386574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:45.493 [2024-12-06 13:28:32.386597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.493 [2024-12-06 13:28:32.386620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:45.493 [2024-12-06 13:28:32.386632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.493 [2024-12-06 13:28:32.386654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:45.493 [2024-12-06 13:28:32.386665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.493 [2024-12-06 13:28:32.386687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:45.493 [2024-12-06 13:28:32.386699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.493 [2024-12-06 13:28:32.386721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:45.493 [2024-12-06 13:28:32.386732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:45.493 [2024-12-06 13:28:32.386753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:45.493 [2024-12-06 13:28:32.386764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:45.493 [2024-12-06 13:28:32.386775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:45.493 [2024-12-06 13:28:32.386786] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log 00:33:45.493 [2024-12-06 13:28:32.386797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:45.493 [2024-12-06 13:28:32.386808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:45.493 [2024-12-06 13:28:32.386830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:45.493 [2024-12-06 13:28:32.386841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386852] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:45.493 [2024-12-06 13:28:32.386864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:45.493 [2024-12-06 13:28:32.386877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:45.493 [2024-12-06 13:28:32.386889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.493 [2024-12-06 13:28:32.386901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:45.493 [2024-12-06 13:28:32.386912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:45.493 [2024-12-06 13:28:32.386923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:45.493 [2024-12-06 13:28:32.386934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:45.493 [2024-12-06 13:28:32.386945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:45.493 [2024-12-06 13:28:32.386956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:45.493 [2024-12-06 13:28:32.386972] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:45.493 [2024-12-06 13:28:32.386987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:45.493 [2024-12-06 13:28:32.387006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:45.493 [2024-12-06 13:28:32.387019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:45.493 [2024-12-06 13:28:32.387031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:45.493 [2024-12-06 13:28:32.387042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:45.493 [2024-12-06 13:28:32.387054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:45.493 [2024-12-06 13:28:32.387066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:45.493 [2024-12-06 13:28:32.387078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:45.493 [2024-12-06 13:28:32.387091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:45.493 [2024-12-06 13:28:32.387102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:45.493 [2024-12-06 13:28:32.387114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:45.493 [2024-12-06 13:28:32.387141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:45.493 [2024-12-06 13:28:32.387156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:45.493 [2024-12-06 13:28:32.387168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:45.493 [2024-12-06 13:28:32.387180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:45.493 [2024-12-06 13:28:32.387192] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:45.493 [2024-12-06 13:28:32.387205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:45.493 [2024-12-06 13:28:32.387219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:45.493 [2024-12-06 13:28:32.387232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:45.493 [2024-12-06 13:28:32.387244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:45.493 [2024-12-06 13:28:32.387256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:45.493 [2024-12-06 13:28:32.387270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.493 [2024-12-06 13:28:32.387283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:45.493 [2024-12-06 13:28:32.387296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:33:45.493 [2024-12-06 13:28:32.387308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.493 [2024-12-06 13:28:32.440621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.493 [2024-12-06 13:28:32.440703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:45.493 [2024-12-06 13:28:32.440725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.236 ms 00:33:45.493 [2024-12-06 13:28:32.440745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.493 [2024-12-06 13:28:32.440869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.493 [2024-12-06 13:28:32.440886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:45.493 [2024-12-06 13:28:32.440901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:45.493 [2024-12-06 13:28:32.440913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.752 [2024-12-06 13:28:32.507653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.752 [2024-12-06 13:28:32.507731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize NV cache 00:33:45.752 [2024-12-06 13:28:32.507754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.632 ms 00:33:45.752 [2024-12-06 13:28:32.507767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.752 [2024-12-06 13:28:32.507849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.752 [2024-12-06 13:28:32.507867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:45.752 [2024-12-06 13:28:32.507889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:45.752 [2024-12-06 13:28:32.507901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.752 [2024-12-06 13:28:32.508632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.508661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:45.753 [2024-12-06 13:28:32.508677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:33:45.753 [2024-12-06 13:28:32.508689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.508869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.508890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:45.753 [2024-12-06 13:28:32.508912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:33:45.753 [2024-12-06 13:28:32.508924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.529016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.529091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:45.753 [2024-12-06 13:28:32.529113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.059 ms 00:33:45.753 [2024-12-06 13:28:32.529138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.546895] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:45.753 [2024-12-06 13:28:32.546986] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:45.753 [2024-12-06 13:28:32.547011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.547024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:45.753 [2024-12-06 13:28:32.547041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.671 ms 00:33:45.753 [2024-12-06 13:28:32.547053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.576759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.576841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:45.753 [2024-12-06 13:28:32.576863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.600 ms 00:33:45.753 [2024-12-06 13:28:32.576876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.593804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.593877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:45.753 [2024-12-06 13:28:32.593900] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.838 ms 00:33:45.753 [2024-12-06 13:28:32.593914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.609744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.609819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:45.753 [2024-12-06 13:28:32.609839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.761 ms 00:33:45.753 [2024-12-06 13:28:32.609852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.610882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.610919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:45.753 [2024-12-06 13:28:32.610942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:33:45.753 [2024-12-06 13:28:32.610955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.689377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.689462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:45.753 [2024-12-06 13:28:32.689493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.392 ms 00:33:45.753 [2024-12-06 13:28:32.689507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.703424] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:45.753 [2024-12-06 13:28:32.707556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.707600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:45.753 [2024-12-06 13:28:32.707621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.963 ms 00:33:45.753 [2024-12-06 13:28:32.707634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.707772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.707794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:45.753 [2024-12-06 13:28:32.707815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:45.753 [2024-12-06 13:28:32.707827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.709841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.709879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:45.753 [2024-12-06 13:28:32.709895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:33:45.753 [2024-12-06 13:28:32.709907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.709948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.709965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:45.753 [2024-12-06 13:28:32.709978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:45.753 [2024-12-06 13:28:32.709991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.710079] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:45.753 [2024-12-06 13:28:32.710097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.710110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:45.753 [2024-12-06 13:28:32.710123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:45.753 [2024-12-06 13:28:32.710151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.741835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.741912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:45.753 [2024-12-06 13:28:32.741944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.647 ms 00:33:45.753 [2024-12-06 13:28:32.741957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.742065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.753 [2024-12-06 13:28:32.742085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:45.753 [2024-12-06 13:28:32.742099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:45.753 [2024-12-06 13:28:32.742112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.753 [2024-12-06 13:28:32.747342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.244 ms, result 0 00:33:47.129  [2024-12-06T13:28:35.081Z] Copying: 900/1048576 [kB] (900 kBps) [2024-12-06T13:28:36.013Z] Copying: 4500/1048576 [kB] (3600 kBps) [2024-12-06T13:28:37.424Z] Copying: 26/1024 [MB] (21 MBps) [2024-12-06T13:28:37.989Z] Copying: 56/1024 [MB] (30 MBps) [2024-12-06T13:28:39.365Z] Copying: 88/1024 [MB] (31 MBps) [2024-12-06T13:28:40.300Z] Copying: 119/1024 [MB] (30 MBps) [2024-12-06T13:28:41.235Z] Copying: 149/1024 [MB] (30 MBps) [2024-12-06T13:28:42.245Z] Copying: 179/1024 [MB] (30 MBps) [2024-12-06T13:28:43.181Z] Copying: 208/1024 [MB] (29 MBps) [2024-12-06T13:28:44.116Z] Copying: 237/1024 [MB] (29 MBps) [2024-12-06T13:28:45.052Z] Copying: 267/1024 [MB] (29 MBps) [2024-12-06T13:28:45.988Z] Copying: 295/1024 [MB] (27 MBps) [2024-12-06T13:28:47.394Z] Copying: 323/1024 [MB] (28 MBps) [2024-12-06T13:28:48.328Z] Copying: 352/1024 [MB] (29 MBps) [2024-12-06T13:28:49.265Z] Copying: 382/1024 [MB] (29 MBps) [2024-12-06T13:28:50.202Z] Copying: 410/1024 [MB] (27 MBps) [2024-12-06T13:28:51.222Z] Copying: 438/1024 [MB] (28 MBps) [2024-12-06T13:28:52.159Z] Copying: 466/1024 [MB] (28 MBps) [2024-12-06T13:28:53.095Z] Copying: 495/1024 [MB] (28 MBps) [2024-12-06T13:28:54.032Z] Copying: 524/1024 [MB] (29 MBps) [2024-12-06T13:28:55.468Z] Copying: 553/1024 [MB] (28 MBps) [2024-12-06T13:28:56.034Z] Copying: 581/1024 [MB] (28 MBps) [2024-12-06T13:28:57.405Z] Copying: 609/1024 [MB] (27 MBps) [2024-12-06T13:28:58.340Z] Copying: 637/1024 [MB] (27 MBps) [2024-12-06T13:28:59.273Z] Copying: 665/1024 [MB] (27 MBps) [2024-12-06T13:29:00.207Z] Copying: 693/1024 [MB] (28 MBps) [2024-12-06T13:29:01.138Z] Copying: 720/1024 [MB] (26 MBps) [2024-12-06T13:29:02.070Z] Copying: 747/1024 [MB] (27 MBps) [2024-12-06T13:29:03.031Z] Copying: 775/1024 [MB] (28 MBps) [2024-12-06T13:29:04.406Z] Copying: 803/1024 [MB] (27 MBps) [2024-12-06T13:29:05.338Z] Copying: 831/1024 [MB] (28 MBps) [2024-12-06T13:29:06.269Z] Copying: 859/1024 [MB] (27 MBps) 
[2024-12-06T13:29:07.204Z] Copying: 887/1024 [MB] (28 MBps) [2024-12-06T13:29:08.139Z] Copying: 916/1024 [MB] (28 MBps) [2024-12-06T13:29:09.074Z] Copying: 944/1024 [MB] (28 MBps) [2024-12-06T13:29:10.010Z] Copying: 972/1024 [MB] (28 MBps) [2024-12-06T13:29:10.945Z] Copying: 1000/1024 [MB] (28 MBps) [2024-12-06T13:29:10.945Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 13:29:10.816196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.816260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:23.929 [2024-12-06 13:29:10.816283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:23.929 [2024-12-06 13:29:10.816297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.816329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:23.929 [2024-12-06 13:29:10.820011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.820066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:23.929 [2024-12-06 13:29:10.820083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:34:23.929 [2024-12-06 13:29:10.820095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.820362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.820398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:23.929 [2024-12-06 13:29:10.820413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:34:23.929 [2024-12-06 13:29:10.820425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.833010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.833061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:23.929 [2024-12-06 13:29:10.833081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.561 ms 00:34:23.929 [2024-12-06 13:29:10.833095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.840221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.840296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:23.929 [2024-12-06 13:29:10.840337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.077 ms 00:34:23.929 [2024-12-06 13:29:10.840350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.873819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.873863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:23.929 [2024-12-06 13:29:10.873881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.402 ms 00:34:23.929 [2024-12-06 13:29:10.873894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.892403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.892461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:23.929 [2024-12-06 13:29:10.892492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.465 ms 
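The copy progress above can be sanity-checked from its own timestamps: the transfer ramps up slowly while the first megabytes land (900 kBps, then 3600 kBps), settles around 27-31 MBps per interval, and moves roughly 1024 MB between the first and last bracketed stamps. A small sketch of that arithmetic (instants copied from the "Copying:" entries above; nothing here is an spdk_dd interface):

```python
# Estimate average spdk_dd throughput from the bracketed progress timestamps.
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

start = ts("2024-12-06T13:28:35.081Z")   # first progress entry (900 kB copied)
end = ts("2024-12-06T13:29:10.945Z")     # final entry (1024/1024 MB)
elapsed = (end - start).total_seconds()  # ~35.9 s
print(f"1024 MB / {elapsed:.1f} s = {1024 / elapsed:.1f} MBps")
# ~28.5 MBps; the logged "average 27 MBps" presumably also counts the
# ramp-up before the first progress line, so the two figures agree closely.
```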
00:34:23.929 [2024-12-06 13:29:10.892503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.894609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.894655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:23.929 [2024-12-06 13:29:10.894673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:34:23.929 [2024-12-06 13:29:10.894694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.929 [2024-12-06 13:29:10.926589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.929 [2024-12-06 13:29:10.926649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:23.929 [2024-12-06 13:29:10.926668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.871 ms 00:34:23.929 [2024-12-06 13:29:10.926681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.188 [2024-12-06 13:29:10.958200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.188 [2024-12-06 13:29:10.958277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:24.188 [2024-12-06 13:29:10.958337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.475 ms 00:34:24.188 [2024-12-06 13:29:10.958352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.188 [2024-12-06 13:29:10.990077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.188 [2024-12-06 13:29:10.990157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:24.188 [2024-12-06 13:29:10.990174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.665 ms 00:34:24.188 [2024-12-06 13:29:10.990185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.188 [2024-12-06 13:29:11.021076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.188 [2024-12-06 13:29:11.021170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:24.188 [2024-12-06 13:29:11.021209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.801 ms 00:34:24.188 [2024-12-06 13:29:11.021220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.188 [2024-12-06 13:29:11.021261] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:24.188 [2024-12-06 13:29:11.021284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:24.188 [2024-12-06 13:29:11.021299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:34:24.188 [2024-12-06 13:29:11.021312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021389] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 
13:29:11.021792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:24.188 [2024-12-06 13:29:11.021982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.021995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:34:24.189 [2024-12-06 13:29:11.022178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:24.189 [2024-12-06 13:29:11.022818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:24.189 [2024-12-06 13:29:11.022830] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 104883f5-0a7e-4c98-bc4c-b46162b2b89d 00:34:24.189 [2024-12-06 13:29:11.022842] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:34:24.189 [2024-12-06 13:29:11.022853] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136896 00:34:24.189 [2024-12-06 13:29:11.022870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134912 00:34:24.189 [2024-12-06 13:29:11.022881] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:34:24.189 [2024-12-06 13:29:11.022893] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:24.189 [2024-12-06 13:29:11.022918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:34:24.189 [2024-12-06 13:29:11.022930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:24.189 [2024-12-06 13:29:11.022955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:24.189 [2024-12-06 13:29:11.022966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:24.189 [2024-12-06 13:29:11.022978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.189 [2024-12-06 13:29:11.023004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:24.189 [2024-12-06 13:29:11.023017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:34:24.189 [2024-12-06 13:29:11.023029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.189 [2024-12-06 13:29:11.041192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.189 [2024-12-06 13:29:11.041240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:24.189 [2024-12-06 13:29:11.041259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.099 ms 00:34:24.189 [2024-12-06 13:29:11.041271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.189 [2024-12-06 13:29:11.041740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.189 [2024-12-06 13:29:11.041770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:24.189 [2024-12-06 13:29:11.041786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:34:24.189 [2024-12-06 13:29:11.041798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.189 [2024-12-06 13:29:11.090572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.189 [2024-12-06 13:29:11.090632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:24.189 [2024-12-06 13:29:11.090650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.189 [2024-12-06 13:29:11.090663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.189 [2024-12-06 13:29:11.090744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.189 [2024-12-06 13:29:11.090760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:24.189 [2024-12-06 13:29:11.090773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.189 [2024-12-06 13:29:11.090786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.189 [2024-12-06 13:29:11.090889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.189 [2024-12-06 13:29:11.090909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:24.189 [2024-12-06 13:29:11.090923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.189 [2024-12-06 13:29:11.090935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.189 [2024-12-06 13:29:11.090958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.189 [2024-12-06 13:29:11.090973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:24.189 [2024-12-06 13:29:11.091001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.189 [2024-12-06 13:29:11.091027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
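Side note on the ftl_dev_dump_stats output above: the WAF figure is simply total writes divided by user writes, and the per-band valid counts in the Bands validity dump sum to the reported total. A minimal Python cross-check, with all constants copied from this shutdown's dump (treating the printed WAF as that ratio rounded to four decimals is an assumption about ftl_debug.c's formatting):

total_writes = 136896        # "total writes: 136896"
user_writes = 134912         # "user writes: 134912"
assert round(total_writes / user_writes, 4) == 1.0147    # "WAF: 1.0147"
# Only bands 1 and 2 hold data (261120 and 1536 valid blocks, the rest free),
# so their valid counts must add up to the reported total:
assert 261120 + 1536 == 262656                           # "total valid LBAs: 262656"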
00:34:24.447 [2024-12-06 13:29:11.209700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.209774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:24.448 [2024-12-06 13:29:11.209804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.209817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.448 [2024-12-06 13:29:11.305295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.305358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:24.448 [2024-12-06 13:29:11.305395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.305407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.448 [2024-12-06 13:29:11.305510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.305536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:24.448 [2024-12-06 13:29:11.305550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.305561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.448 [2024-12-06 13:29:11.305622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.305653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:24.448 [2024-12-06 13:29:11.305681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.305694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.448 [2024-12-06 13:29:11.305833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.305854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:24.448 [2024-12-06 13:29:11.305874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.305887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.448 [2024-12-06 13:29:11.305937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.305955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:24.448 [2024-12-06 13:29:11.305969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.305981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.448 [2024-12-06 13:29:11.306028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.306043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:24.448 [2024-12-06 13:29:11.306063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.306075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.448 [2024-12-06 13:29:11.306141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:24.448 [2024-12-06 13:29:11.306158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:24.448 [2024-12-06 13:29:11.306186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:24.448 [2024-12-06 13:29:11.306198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:34:24.448 [2024-12-06 13:29:11.306388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 490.143 ms, result 0 00:34:25.383 00:34:25.383 00:34:25.383 13:29:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:27.914 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:27.914 13:29:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:27.914 [2024-12-06 13:29:14.648461] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:34:27.914 [2024-12-06 13:29:14.648637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83249 ] 00:34:27.914 [2024-12-06 13:29:14.843923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.181 [2024-12-06 13:29:15.000597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.439 [2024-12-06 13:29:15.395888] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:28.439 [2024-12-06 13:29:15.396033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:28.697 [2024-12-06 13:29:15.563819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.697 [2024-12-06 13:29:15.563900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:28.697 [2024-12-06 13:29:15.563922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:28.697 [2024-12-06 13:29:15.563936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.697 [2024-12-06 13:29:15.564046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.697 [2024-12-06 13:29:15.564067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:28.697 [2024-12-06 13:29:15.564081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:28.697 [2024-12-06 13:29:15.564103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.564136] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:28.698 [2024-12-06 13:29:15.565120] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:28.698 [2024-12-06 13:29:15.565161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.565175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:28.698 [2024-12-06 13:29:15.565220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:34:28.698 [2024-12-06 13:29:15.565232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.567286] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:28.698 [2024-12-06 13:29:15.586123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.586252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Load super block 00:34:28.698 [2024-12-06 13:29:15.586272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.839 ms 00:34:28.698 [2024-12-06 13:29:15.586285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.586388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.586408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:28.698 [2024-12-06 13:29:15.586422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:28.698 [2024-12-06 13:29:15.586434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.596398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.596458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:28.698 [2024-12-06 13:29:15.596486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.864 ms 00:34:28.698 [2024-12-06 13:29:15.596504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.596640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.596660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:28.698 [2024-12-06 13:29:15.596674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:34:28.698 [2024-12-06 13:29:15.596687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.596776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.596797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:28.698 [2024-12-06 13:29:15.596811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:28.698 [2024-12-06 13:29:15.596824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.596868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:28.698 [2024-12-06 13:29:15.602294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.602360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:28.698 [2024-12-06 13:29:15.602385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.438 ms 00:34:28.698 [2024-12-06 13:29:15.602398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.602443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.602461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:28.698 [2024-12-06 13:29:15.602475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:28.698 [2024-12-06 13:29:15.602487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.602534] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:28.698 [2024-12-06 13:29:15.602569] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:28.698 [2024-12-06 13:29:15.602613] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
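Side note: every FTL management step in this log is traced by mngt/ftl_mngt.c as the same quadruplet (Action or Rollback, then name, duration, status), and finish_msg later reports the total for the whole pipeline ('FTL startup', 'FTL shutdown'). A minimal Python sketch for pulling per-step durations out of a captured console log, assuming the raw one-entry-per-line Jenkins output rather than the wrapped rendering here (the build.log filename is hypothetical):

import re

# trace_step prints a "name: ..." entry and then a "duration: ... ms" entry
# for the same step; pair them up in order of appearance.
NAME = re.compile(r"\[FTL\]\[\w+\] name: (.+)$")
DURATION = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(lines):
    steps, pending = [], None
    for line in lines:
        m = NAME.search(line)
        if m:
            pending = m.group(1).strip()
            continue
        m = DURATION.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps

# Example: list the five slowest steps of this run.
with open("build.log") as f:   # hypothetical capture of this console log
    for name, ms in sorted(step_durations(f), key=lambda s: -s[1])[:5]:
        print(f"{ms:9.3f} ms  {name}")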
00:34:28.698 [2024-12-06 13:29:15.602639] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:28.698 [2024-12-06 13:29:15.602748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:28.698 [2024-12-06 13:29:15.602764] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:28.698 [2024-12-06 13:29:15.602780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:28.698 [2024-12-06 13:29:15.602796] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:28.698 [2024-12-06 13:29:15.602810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:28.698 [2024-12-06 13:29:15.602824] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:28.698 [2024-12-06 13:29:15.602836] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:28.698 [2024-12-06 13:29:15.602852] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:28.698 [2024-12-06 13:29:15.602864] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:28.698 [2024-12-06 13:29:15.602879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.602891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:28.698 [2024-12-06 13:29:15.602904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:34:28.698 [2024-12-06 13:29:15.602917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.603028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.698 [2024-12-06 13:29:15.603045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:28.698 [2024-12-06 13:29:15.603058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:28.698 [2024-12-06 13:29:15.603069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.698 [2024-12-06 13:29:15.603224] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:28.698 [2024-12-06 13:29:15.603270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:28.698 [2024-12-06 13:29:15.603285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:28.698 [2024-12-06 13:29:15.603298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:28.698 [2024-12-06 13:29:15.603324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:28.698 [2024-12-06 13:29:15.603348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:28.698 [2024-12-06 13:29:15.603360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:28.698 [2024-12-06 13:29:15.603384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:28.698 [2024-12-06 13:29:15.603395] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:28.698 [2024-12-06 13:29:15.603407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:28.698 [2024-12-06 13:29:15.603433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:28.698 [2024-12-06 13:29:15.603446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:28.698 [2024-12-06 13:29:15.603458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:28.698 [2024-12-06 13:29:15.603481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:28.698 [2024-12-06 13:29:15.603492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:28.698 [2024-12-06 13:29:15.603515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.698 [2024-12-06 13:29:15.603539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:28.698 [2024-12-06 13:29:15.603551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.698 [2024-12-06 13:29:15.603574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:28.698 [2024-12-06 13:29:15.603586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.698 [2024-12-06 13:29:15.603622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:28.698 [2024-12-06 13:29:15.603633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.698 [2024-12-06 13:29:15.603656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:28.698 [2024-12-06 13:29:15.603668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:28.698 [2024-12-06 13:29:15.603690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:28.698 [2024-12-06 13:29:15.603702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:28.698 [2024-12-06 13:29:15.603713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:28.698 [2024-12-06 13:29:15.603727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:28.698 [2024-12-06 13:29:15.603739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:28.698 [2024-12-06 13:29:15.603751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.698 [2024-12-06 13:29:15.603762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:28.698 [2024-12-06 13:29:15.603774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:28.698 [2024-12-06 13:29:15.603785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
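Side note on the "Verify layout" dump above: the l2p region holds one L2P address per logical block, so its 80.00 MiB size follows directly from the "L2P entries" and "L2P address size" values printed by ftl_layout_setup. A quick Python check with the constants copied from the log (the 4 KiB logical block size is an assumption):

l2p_entries = 20971520       # "L2P entries: 20971520"
addr_size = 4                # "L2P address size: 4" (bytes per entry)
MiB = 1024 * 1024
assert l2p_entries * addr_size == 80 * MiB   # "Region l2p ... blocks: 80.00 MiB"
# With an assumed 4 KiB block, those entries address 20971520 * 4 KiB = 80 GiB
# of logical space.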
00:34:28.698 [2024-12-06 13:29:15.603797] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:28.698 [2024-12-06 13:29:15.603809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:28.698 [2024-12-06 13:29:15.603821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:28.699 [2024-12-06 13:29:15.603833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.699 [2024-12-06 13:29:15.603846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:28.699 [2024-12-06 13:29:15.603857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:28.699 [2024-12-06 13:29:15.603869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:28.699 [2024-12-06 13:29:15.603881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:28.699 [2024-12-06 13:29:15.603892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:28.699 [2024-12-06 13:29:15.603904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:28.699 [2024-12-06 13:29:15.603918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:28.699 [2024-12-06 13:29:15.603933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:28.699 [2024-12-06 13:29:15.603952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:28.699 [2024-12-06 13:29:15.603965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:28.699 [2024-12-06 13:29:15.603978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:28.699 [2024-12-06 13:29:15.603991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:28.699 [2024-12-06 13:29:15.604004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:28.699 [2024-12-06 13:29:15.604017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:28.699 [2024-12-06 13:29:15.604044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:28.699 [2024-12-06 13:29:15.604056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:28.699 [2024-12-06 13:29:15.604068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:28.699 [2024-12-06 13:29:15.604080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:28.699 [2024-12-06 13:29:15.604103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:28.699 [2024-12-06 13:29:15.604115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:28.699 
[2024-12-06 13:29:15.604128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:28.699 [2024-12-06 13:29:15.604140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:28.699 [2024-12-06 13:29:15.604164] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:28.699 [2024-12-06 13:29:15.604179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:28.699 [2024-12-06 13:29:15.604192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:28.699 [2024-12-06 13:29:15.604205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:28.699 [2024-12-06 13:29:15.604229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:28.699 [2024-12-06 13:29:15.604242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:28.699 [2024-12-06 13:29:15.604256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.699 [2024-12-06 13:29:15.604269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:28.699 [2024-12-06 13:29:15.604281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:34:28.699 [2024-12-06 13:29:15.604293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.699 [2024-12-06 13:29:15.646266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.699 [2024-12-06 13:29:15.646351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:28.699 [2024-12-06 13:29:15.646372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.897 ms 00:34:28.699 [2024-12-06 13:29:15.646392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.699 [2024-12-06 13:29:15.646517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.699 [2024-12-06 13:29:15.646535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:28.699 [2024-12-06 13:29:15.646550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:34:28.699 [2024-12-06 13:29:15.646562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.699 [2024-12-06 13:29:15.704645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.699 [2024-12-06 13:29:15.704711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:28.699 [2024-12-06 13:29:15.704758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.951 ms 00:34:28.699 [2024-12-06 13:29:15.704772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
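Side note on the "SB metadata layout - nvc" table above: the regions tile the NV cache device exactly, with each blk_offs equal to the previous offset plus size, and the final region ending at the device capacity reported earlier (5171.00 MiB). A Python sketch of that consistency check, with offsets and sizes copied from the dump (the 4 KiB FTL block size is an assumption):

# (blk_offs, blk_sz) pairs in FTL blocks, in the order dumped above.
regions = [
    (0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
    (0x5120, 0x800), (0x5920, 0x800), (0x6120, 0x800), (0x6920, 0x800),
    (0x7120, 0x40), (0x7160, 0x40), (0x71a0, 0x20), (0x71c0, 0x20),
    (0x71e0, 0x20), (0x7200, 0x20), (0x7220, 0x13c0e0),
]
pos = 0
for offs, size in regions:
    assert offs == pos, f"gap or overlap at 0x{offs:x}"
    pos = offs + size
FTL_BLOCK = 4096   # assumed logical block size in bytes
assert pos * FTL_BLOCK == 5171 * 1024 * 1024   # "NV cache device capacity: 5171.00 MiB"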
00:34:28.699 [2024-12-06 13:29:15.704894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.699 [2024-12-06 13:29:15.704912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:28.699 [2024-12-06 13:29:15.704931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:28.699 [2024-12-06 13:29:15.704943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.699 [2024-12-06 13:29:15.705634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.699 [2024-12-06 13:29:15.705660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:28.699 [2024-12-06 13:29:15.705675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:34:28.699 [2024-12-06 13:29:15.705688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.699 [2024-12-06 13:29:15.705865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.699 [2024-12-06 13:29:15.705886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:28.699 [2024-12-06 13:29:15.705907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:34:28.699 [2024-12-06 13:29:15.705921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.956 [2024-12-06 13:29:15.726037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.956 [2024-12-06 13:29:15.726112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:28.956 [2024-12-06 13:29:15.726129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.087 ms 00:34:28.956 [2024-12-06 13:29:15.726151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.956 [2024-12-06 13:29:15.744068] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:28.956 [2024-12-06 13:29:15.744121] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:28.956 [2024-12-06 13:29:15.744163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.956 [2024-12-06 13:29:15.744178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:28.956 [2024-12-06 13:29:15.744191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.798 ms 00:34:28.956 [2024-12-06 13:29:15.744203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.956 [2024-12-06 13:29:15.771575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.956 [2024-12-06 13:29:15.771628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:28.956 [2024-12-06 13:29:15.771645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.293 ms 00:34:28.956 [2024-12-06 13:29:15.771656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.956 [2024-12-06 13:29:15.785925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.785977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:28.957 [2024-12-06 13:29:15.785993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.207 ms 00:34:28.957 [2024-12-06 13:29:15.786004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.802317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.802356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:28.957 [2024-12-06 13:29:15.802372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.259 ms 00:34:28.957 [2024-12-06 13:29:15.802385] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.803276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.803309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:28.957 [2024-12-06 13:29:15.803330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:34:28.957 [2024-12-06 13:29:15.803343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.882204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.882280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:28.957 [2024-12-06 13:29:15.882306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.833 ms 00:34:28.957 [2024-12-06 13:29:15.882356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.894710] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:28.957 [2024-12-06 13:29:15.898548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.898582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:28.957 [2024-12-06 13:29:15.898601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.123 ms 00:34:28.957 [2024-12-06 13:29:15.898628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.898769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.898789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:28.957 [2024-12-06 13:29:15.898807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:28.957 [2024-12-06 13:29:15.898850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.899924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.899971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:28.957 [2024-12-06 13:29:15.899986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:34:28.957 [2024-12-06 13:29:15.899998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.900033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.900061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:28.957 [2024-12-06 13:29:15.900074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:28.957 [2024-12-06 13:29:15.900086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.900135] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:28.957 [2024-12-06 13:29:15.900182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.900194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:28.957 [2024-12-06 13:29:15.900224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:28.957 [2024-12-06 13:29:15.900237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.932423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.932496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:28.957 [2024-12-06 13:29:15.932523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.154 ms 00:34:28.957 [2024-12-06 13:29:15.932536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.932631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.957 [2024-12-06 13:29:15.932650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:28.957 [2024-12-06 13:29:15.932663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:28.957 [2024-12-06 13:29:15.932690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.957 [2024-12-06 13:29:15.934212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 369.847 ms, result 0 00:34:30.334  [2024-12-06T13:29:18.304Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-06T13:29:19.299Z] Copying: 47/1024 [MB] (23 MBps) [2024-12-06T13:29:20.255Z] Copying: 71/1024 [MB] (24 MBps) [2024-12-06T13:29:21.194Z] Copying: 95/1024 [MB] (24 MBps) [2024-12-06T13:29:22.147Z] Copying: 121/1024 [MB] (25 MBps) [2024-12-06T13:29:23.515Z] Copying: 148/1024 [MB] (26 MBps) [2024-12-06T13:29:24.446Z] Copying: 173/1024 [MB] (25 MBps) [2024-12-06T13:29:25.375Z] Copying: 198/1024 [MB] (24 MBps) [2024-12-06T13:29:26.308Z] Copying: 224/1024 [MB] (26 MBps) [2024-12-06T13:29:27.243Z] Copying: 250/1024 [MB] (25 MBps) [2024-12-06T13:29:28.177Z] Copying: 276/1024 [MB] (26 MBps) [2024-12-06T13:29:29.554Z] Copying: 302/1024 [MB] (26 MBps) [2024-12-06T13:29:30.489Z] Copying: 329/1024 [MB] (26 MBps) [2024-12-06T13:29:31.451Z] Copying: 356/1024 [MB] (27 MBps) [2024-12-06T13:29:32.384Z] Copying: 385/1024 [MB] (28 MBps) [2024-12-06T13:29:33.318Z] Copying: 412/1024 [MB] (27 MBps) [2024-12-06T13:29:34.253Z] Copying: 440/1024 [MB] (28 MBps) [2024-12-06T13:29:35.187Z] Copying: 468/1024 [MB] (27 MBps) [2024-12-06T13:29:36.559Z] Copying: 495/1024 [MB] (27 MBps) [2024-12-06T13:29:37.492Z] Copying: 522/1024 [MB] (26 MBps) [2024-12-06T13:29:38.428Z] Copying: 546/1024 [MB] (23 MBps) [2024-12-06T13:29:39.363Z] Copying: 569/1024 [MB] (23 MBps) [2024-12-06T13:29:40.296Z] Copying: 592/1024 [MB] (22 MBps) [2024-12-06T13:29:41.230Z] Copying: 615/1024 [MB] (22 MBps) [2024-12-06T13:29:42.165Z] Copying: 639/1024 [MB] (24 MBps) [2024-12-06T13:29:43.557Z] Copying: 665/1024 [MB] (25 MBps) [2024-12-06T13:29:44.490Z] Copying: 689/1024 [MB] (24 MBps) [2024-12-06T13:29:45.421Z] Copying: 714/1024 [MB] (25 MBps) [2024-12-06T13:29:46.353Z] Copying: 742/1024 [MB] (27 MBps) [2024-12-06T13:29:47.289Z] Copying: 769/1024 [MB] (27 MBps) [2024-12-06T13:29:48.223Z] Copying: 796/1024 [MB] (26 MBps) [2024-12-06T13:29:49.158Z] Copying: 822/1024 [MB] (26 MBps) [2024-12-06T13:29:50.532Z] Copying: 848/1024 [MB] (25 MBps) [2024-12-06T13:29:51.467Z] Copying: 871/1024 [MB] (22 MBps) [2024-12-06T13:29:52.401Z] Copying: 898/1024 [MB] (27 MBps) [2024-12-06T13:29:53.335Z] Copying: 925/1024 [MB] (26 MBps) [2024-12-06T13:29:54.269Z] Copying: 950/1024 [MB] (25 MBps) [2024-12-06T13:29:55.205Z] Copying: 976/1024 [MB] (26 MBps) [2024-12-06T13:29:56.139Z] Copying: 1002/1024 [MB] (26 MBps) [2024-12-06T13:29:56.139Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-06 13:29:55.967473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:55.967693] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:09.123 [2024-12-06 13:29:55.967825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:09.123 [2024-12-06 13:29:55.967879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:55.967951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:09.123 [2024-12-06 13:29:55.972850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:55.973012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:09.123 [2024-12-06 13:29:55.973144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:35:09.123 [2024-12-06 13:29:55.973205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:55.973549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:55.973606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:09.123 [2024-12-06 13:29:55.973648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:35:09.123 [2024-12-06 13:29:55.973786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:55.977825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:55.977982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:09.123 [2024-12-06 13:29:55.978103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.979 ms 00:35:09.123 [2024-12-06 13:29:55.978288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:55.985044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:55.985225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:09.123 [2024-12-06 13:29:55.985340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.673 ms 00:35:09.123 [2024-12-06 13:29:55.985391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:56.018382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:56.018579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:09.123 [2024-12-06 13:29:56.018705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.876 ms 00:35:09.123 [2024-12-06 13:29:56.018756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:56.037532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:56.037717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:09.123 [2024-12-06 13:29:56.037869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.646 ms 00:35:09.123 [2024-12-06 13:29:56.037923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:56.039893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:56.040044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:09.123 [2024-12-06 13:29:56.040178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:35:09.123 [2024-12-06 13:29:56.040308] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:56.072317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:56.072579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:09.123 [2024-12-06 13:29:56.072613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.939 ms 00:35:09.123 [2024-12-06 13:29:56.072629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:56.103671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:56.103741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:09.123 [2024-12-06 13:29:56.103763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.960 ms 00:35:09.123 [2024-12-06 13:29:56.103777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.123 [2024-12-06 13:29:56.134404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.123 [2024-12-06 13:29:56.134466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:09.123 [2024-12-06 13:29:56.134487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.568 ms 00:35:09.123 [2024-12-06 13:29:56.134501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.389 [2024-12-06 13:29:56.165235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.389 [2024-12-06 13:29:56.165344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:09.389 [2024-12-06 13:29:56.165368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.617 ms 00:35:09.389 [2024-12-06 13:29:56.165382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.389 [2024-12-06 13:29:56.165439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:09.389 [2024-12-06 13:29:56.165479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:09.389 [2024-12-06 13:29:56.165503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:35:09.389 [2024-12-06 13:29:56.165518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 
0 state: free 00:35:09.389 [2024-12-06 13:29:56.165642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
36: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.165994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166344] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:09.389 [2024-12-06 13:29:56.166627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166706] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:09.390 [2024-12-06 13:29:56.166930] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:09.390 [2024-12-06 13:29:56.166944] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 104883f5-0a7e-4c98-bc4c-b46162b2b89d 00:35:09.390 [2024-12-06 13:29:56.166958] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:35:09.390 [2024-12-06 13:29:56.166971] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:09.390 [2024-12-06 13:29:56.166983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:09.390 [2024-12-06 13:29:56.166997] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:09.390 [2024-12-06 13:29:56.167026] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:09.390 [2024-12-06 13:29:56.167039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:09.390 [2024-12-06 13:29:56.167052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:09.390 [2024-12-06 13:29:56.167063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:09.390 [2024-12-06 13:29:56.167075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:09.390 [2024-12-06 13:29:56.167087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.390 [2024-12-06 13:29:56.167112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Dump statistics 00:35:09.390 [2024-12-06 13:29:56.167138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:35:09.390 [2024-12-06 13:29:56.167159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.390 [2024-12-06 13:29:56.184918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.390 [2024-12-06 13:29:56.184994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:09.390 [2024-12-06 13:29:56.185017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.675 ms 00:35:09.390 [2024-12-06 13:29:56.185032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.390 [2024-12-06 13:29:56.185606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:09.390 [2024-12-06 13:29:56.185648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:09.390 [2024-12-06 13:29:56.185664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:35:09.390 [2024-12-06 13:29:56.185679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.390 [2024-12-06 13:29:56.234930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.390 [2024-12-06 13:29:56.235026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:09.390 [2024-12-06 13:29:56.235048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.390 [2024-12-06 13:29:56.235064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.390 [2024-12-06 13:29:56.235201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.390 [2024-12-06 13:29:56.235230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:09.390 [2024-12-06 13:29:56.235245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.390 [2024-12-06 13:29:56.235258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.390 [2024-12-06 13:29:56.235381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.390 [2024-12-06 13:29:56.235403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:09.390 [2024-12-06 13:29:56.235419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.390 [2024-12-06 13:29:56.235432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.390 [2024-12-06 13:29:56.235459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.390 [2024-12-06 13:29:56.235476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:09.390 [2024-12-06 13:29:56.235497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.390 [2024-12-06 13:29:56.235511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.390 [2024-12-06 13:29:56.361316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.390 [2024-12-06 13:29:56.361414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:09.390 [2024-12-06 13:29:56.361437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.390 [2024-12-06 13:29:56.361451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.454612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.673 
[2024-12-06 13:29:56.454693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:09.673 [2024-12-06 13:29:56.454715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.673 [2024-12-06 13:29:56.454730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.454918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.673 [2024-12-06 13:29:56.454939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:09.673 [2024-12-06 13:29:56.454953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.673 [2024-12-06 13:29:56.454967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.455023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.673 [2024-12-06 13:29:56.455041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:09.673 [2024-12-06 13:29:56.455056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.673 [2024-12-06 13:29:56.455078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.455270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.673 [2024-12-06 13:29:56.455292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:09.673 [2024-12-06 13:29:56.455308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.673 [2024-12-06 13:29:56.455322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.455380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.673 [2024-12-06 13:29:56.455400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:09.673 [2024-12-06 13:29:56.455416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.673 [2024-12-06 13:29:56.455430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.455513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.673 [2024-12-06 13:29:56.455533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:09.673 [2024-12-06 13:29:56.455547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.673 [2024-12-06 13:29:56.455562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.455643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:09.673 [2024-12-06 13:29:56.455662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:09.673 [2024-12-06 13:29:56.455677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:09.673 [2024-12-06 13:29:56.455707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:09.673 [2024-12-06 13:29:56.455927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 488.398 ms, result 0 00:35:10.612 00:35:10.612 00:35:10.612 13:29:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:35:13.143 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:35:13.143 13:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT 
SIGTERM EXIT 00:35:13.143 13:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:35:13.143 13:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:13.143 13:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:13.143 13:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:35:13.143 13:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:13.143 13:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:35:13.143 13:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81281 00:35:13.143 13:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81281 ']' 00:35:13.143 13:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81281 00:35:13.143 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81281) - No such process 00:35:13.144 Process with pid 81281 is not found 00:35:13.144 13:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81281 is not found' 00:35:13.144 13:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:35:13.711 Remove shared memory files 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:35:13.711 00:35:13.711 real 4m2.014s 00:35:13.711 user 4m46.345s 00:35:13.711 sys 0m41.169s 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.711 13:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:13.711 ************************************ 00:35:13.711 END TEST ftl_dirty_shutdown 00:35:13.711 ************************************ 00:35:13.711 13:30:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:13.711 13:30:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:13.711 13:30:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.711 13:30:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:13.711 ************************************ 00:35:13.711 START TEST ftl_upgrade_shutdown 00:35:13.711 ************************************ 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:13.711 * Looking for test storage... 
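[Editor's note] The `md5sum -c .../testfile2.md5` check above is the heart of the dirty-shutdown test: digests are recorded while the FTL device is live, the device is shut down uncleanly, and the data read back after recovery is verified against those digests. A minimal standalone sketch of the record-then-verify pattern (paths are illustrative, not the test's actual files):

```bash
#!/usr/bin/env bash
# Record a digest before the dirty shutdown, verify after recovery.
testfile=/tmp/ftl_testfile                    # illustrative path
dd if=/dev/urandom of="$testfile" bs=1M count=64 status=none
md5sum "$testfile" > "$testfile.md5"          # snapshot the expected digest

# ... dirty shutdown and recovery of the device under test happen here ...

md5sum -c "$testfile.md5"                     # prints "<file>: OK" on a match
```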
00:35:13.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.711 --rc genhtml_branch_coverage=1 00:35:13.711 --rc genhtml_function_coverage=1 00:35:13.711 --rc genhtml_legend=1 00:35:13.711 --rc geninfo_all_blocks=1 00:35:13.711 --rc geninfo_unexecuted_blocks=1 00:35:13.711 00:35:13.711 ' 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.711 --rc genhtml_branch_coverage=1 00:35:13.711 --rc genhtml_function_coverage=1 00:35:13.711 --rc genhtml_legend=1 00:35:13.711 --rc geninfo_all_blocks=1 00:35:13.711 --rc geninfo_unexecuted_blocks=1 00:35:13.711 00:35:13.711 ' 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.711 --rc genhtml_branch_coverage=1 00:35:13.711 --rc genhtml_function_coverage=1 00:35:13.711 --rc genhtml_legend=1 00:35:13.711 --rc geninfo_all_blocks=1 00:35:13.711 --rc geninfo_unexecuted_blocks=1 00:35:13.711 00:35:13.711 ' 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.711 --rc genhtml_branch_coverage=1 00:35:13.711 --rc genhtml_function_coverage=1 00:35:13.711 --rc genhtml_legend=1 00:35:13.711 --rc geninfo_all_blocks=1 00:35:13.711 --rc geninfo_unexecuted_blocks=1 00:35:13.711 00:35:13.711 ' 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:13.711 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:35:13.970 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:35:13.970 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83760 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83760 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83760 ']' 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.971 13:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:13.971 [2024-12-06 13:30:00.857868] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
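[Editor's note] The `waitforlisten 83760` call above is what produces the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message: it polls until the freshly launched spdk_tgt answers on its RPC socket, giving up after `max_retries` attempts (100 in the trace). A simplified sketch of that loop, assuming only a pid and a socket path (the real helper in autotest_common.sh additionally probes the socket with an RPC call):

```bash
# Simplified wait loop: succeed once the target's RPC socket exists,
# fail fast if the process dies, give up after max_retries polls.
waitforlisten_sketch() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
    local i max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        [[ -S $rpc_sock ]] && return 0           # socket is up, RPCs can flow
        sleep 0.1
    done
    return 1                                     # timed out
}
```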
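[Editor's note] Backing up a step: the lcov probe traced earlier (`lt 1.15 2` via cmp_versions in scripts/common.sh) does a field-wise numeric comparison after splitting the versions on `.`, `-`, and `:`. A simplified, self-contained rendering of that logic (the real helper also validates each field through `decimal`):

```bash
# Field-wise "less than" for dotted version strings, mirroring the
# cmp_versions trace above (simplified; assumes numeric fields).
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 check above
```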
00:35:13.971 [2024-12-06 13:30:00.858032] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83760 ] 00:35:14.229 [2024-12-06 13:30:01.041272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.229 [2024-12-06 13:30:01.211283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:35:15.609 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:35:16.176 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:16.176 { 00:35:16.176 "name": "basen1", 00:35:16.176 "aliases": [ 00:35:16.176 "0ce8b562-9926-4842-9918-3d96318d6b06" 00:35:16.176 ], 00:35:16.176 "product_name": "NVMe disk", 00:35:16.176 "block_size": 4096, 00:35:16.176 "num_blocks": 1310720, 00:35:16.176 "uuid": "0ce8b562-9926-4842-9918-3d96318d6b06", 00:35:16.176 "numa_id": -1, 00:35:16.176 "assigned_rate_limits": { 00:35:16.176 "rw_ios_per_sec": 0, 00:35:16.176 "rw_mbytes_per_sec": 0, 00:35:16.176 "r_mbytes_per_sec": 0, 00:35:16.176 "w_mbytes_per_sec": 0 00:35:16.176 }, 00:35:16.176 "claimed": true, 00:35:16.176 "claim_type": "read_many_write_one", 00:35:16.176 "zoned": false, 00:35:16.176 "supported_io_types": { 00:35:16.176 "read": true, 00:35:16.176 "write": true, 00:35:16.176 "unmap": true, 00:35:16.176 "flush": true, 00:35:16.176 "reset": true, 00:35:16.176 "nvme_admin": true, 00:35:16.176 "nvme_io": true, 00:35:16.176 "nvme_io_md": false, 00:35:16.176 "write_zeroes": true, 00:35:16.176 "zcopy": false, 00:35:16.176 "get_zone_info": false, 00:35:16.176 "zone_management": false, 00:35:16.176 "zone_append": false, 00:35:16.176 "compare": true, 00:35:16.176 "compare_and_write": false, 00:35:16.176 "abort": true, 00:35:16.176 "seek_hole": false, 00:35:16.176 "seek_data": false, 00:35:16.176 "copy": true, 00:35:16.176 "nvme_iov_md": false 00:35:16.176 }, 00:35:16.176 "driver_specific": { 00:35:16.176 "nvme": [ 00:35:16.176 { 00:35:16.176 "pci_address": "0000:00:11.0", 00:35:16.176 "trid": { 00:35:16.176 "trtype": "PCIe", 00:35:16.176 "traddr": "0000:00:11.0" 00:35:16.176 }, 00:35:16.176 "ctrlr_data": { 00:35:16.176 "cntlid": 0, 00:35:16.176 "vendor_id": "0x1b36", 00:35:16.176 "model_number": "QEMU NVMe Ctrl", 00:35:16.176 "serial_number": "12341", 00:35:16.176 "firmware_revision": "8.0.0", 00:35:16.176 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:16.176 "oacs": { 00:35:16.176 "security": 0, 00:35:16.176 "format": 1, 00:35:16.176 "firmware": 0, 00:35:16.176 "ns_manage": 1 00:35:16.176 }, 00:35:16.176 "multi_ctrlr": false, 00:35:16.176 "ana_reporting": false 00:35:16.176 }, 00:35:16.176 "vs": { 00:35:16.176 "nvme_version": "1.4" 00:35:16.176 }, 00:35:16.176 "ns_data": { 00:35:16.176 "id": 1, 00:35:16.176 "can_share": false 00:35:16.176 } 00:35:16.176 } 00:35:16.176 ], 00:35:16.176 "mp_policy": "active_passive" 00:35:16.176 } 00:35:16.176 } 00:35:16.176 ]' 00:35:16.176 13:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:16.176 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:16.435 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8627b79b-036b-4011-8234-993d4ad9ef5f 00:35:16.435 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:35:16.435 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8627b79b-036b-4011-8234-993d4ad9ef5f 00:35:16.692 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:35:16.949 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=06855fda-e6c6-4c50-81bb-23b8361efe1f 00:35:16.949 13:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 06855fda-e6c6-4c50-81bb-23b8361efe1f 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=52b301f9-1001-456b-b654-d13d0af6bc81 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 52b301f9-1001-456b-b654-d13d0af6bc81 ]] 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 52b301f9-1001-456b-b654-d13d0af6bc81 5120 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=52b301f9-1001-456b-b654-d13d0af6bc81 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 52b301f9-1001-456b-b654-d13d0af6bc81 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=52b301f9-1001-456b-b654-d13d0af6bc81 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:35:17.514 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52b301f9-1001-456b-b654-d13d0af6bc81 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:17.772 { 00:35:17.772 "name": "52b301f9-1001-456b-b654-d13d0af6bc81", 00:35:17.772 "aliases": [ 00:35:17.772 "lvs/basen1p0" 00:35:17.772 ], 00:35:17.772 "product_name": "Logical Volume", 00:35:17.772 "block_size": 4096, 00:35:17.772 "num_blocks": 5242880, 00:35:17.772 "uuid": "52b301f9-1001-456b-b654-d13d0af6bc81", 00:35:17.772 "assigned_rate_limits": { 00:35:17.772 "rw_ios_per_sec": 0, 00:35:17.772 "rw_mbytes_per_sec": 0, 00:35:17.772 "r_mbytes_per_sec": 0, 00:35:17.772 "w_mbytes_per_sec": 0 00:35:17.772 }, 00:35:17.772 "claimed": false, 00:35:17.772 "zoned": false, 00:35:17.772 "supported_io_types": { 00:35:17.772 "read": true, 00:35:17.772 "write": true, 00:35:17.772 "unmap": true, 00:35:17.772 "flush": false, 00:35:17.772 "reset": true, 00:35:17.772 "nvme_admin": false, 00:35:17.772 "nvme_io": false, 00:35:17.772 "nvme_io_md": false, 00:35:17.772 "write_zeroes": 
true, 00:35:17.772 "zcopy": false, 00:35:17.772 "get_zone_info": false, 00:35:17.772 "zone_management": false, 00:35:17.772 "zone_append": false, 00:35:17.772 "compare": false, 00:35:17.772 "compare_and_write": false, 00:35:17.772 "abort": false, 00:35:17.772 "seek_hole": true, 00:35:17.772 "seek_data": true, 00:35:17.772 "copy": false, 00:35:17.772 "nvme_iov_md": false 00:35:17.772 }, 00:35:17.772 "driver_specific": { 00:35:17.772 "lvol": { 00:35:17.772 "lvol_store_uuid": "06855fda-e6c6-4c50-81bb-23b8361efe1f", 00:35:17.772 "base_bdev": "basen1", 00:35:17.772 "thin_provision": true, 00:35:17.772 "num_allocated_clusters": 0, 00:35:17.772 "snapshot": false, 00:35:17.772 "clone": false, 00:35:17.772 "esnap_clone": false 00:35:17.772 } 00:35:17.772 } 00:35:17.772 } 00:35:17.772 ]' 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:35:17.772 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:35:18.030 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:35:18.030 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:35:18.030 13:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:35:18.287 13:30:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:35:18.287 13:30:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:35:18.287 13:30:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 52b301f9-1001-456b-b654-d13d0af6bc81 -c cachen1p0 --l2p_dram_limit 2 00:35:18.547 [2024-12-06 13:30:05.491306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.491400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:18.547 [2024-12-06 13:30:05.491432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:18.547 [2024-12-06 13:30:05.491449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.491553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.491577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:18.547 [2024-12-06 13:30:05.491597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:35:18.547 [2024-12-06 13:30:05.491613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.491662] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:18.547 [2024-12-06 
13:30:05.492730] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:18.547 [2024-12-06 13:30:05.492791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.492811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:18.547 [2024-12-06 13:30:05.492833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:35:18.547 [2024-12-06 13:30:05.492849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.493013] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID ed9de9cd-2edd-4475-875f-95f7606860db 00:35:18.547 [2024-12-06 13:30:05.495058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.495117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:35:18.547 [2024-12-06 13:30:05.495159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:35:18.547 [2024-12-06 13:30:05.495181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.505227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.505328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:18.547 [2024-12-06 13:30:05.505362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.942 ms 00:35:18.547 [2024-12-06 13:30:05.505381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.505499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.505530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:18.547 [2024-12-06 13:30:05.505547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:35:18.547 [2024-12-06 13:30:05.505569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.505702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.505734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:18.547 [2024-12-06 13:30:05.505754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:35:18.547 [2024-12-06 13:30:05.505772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.505830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:18.547 [2024-12-06 13:30:05.511334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.511384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:18.547 [2024-12-06 13:30:05.511412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.526 ms 00:35:18.547 [2024-12-06 13:30:05.511437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.511498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.511521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:18.547 [2024-12-06 13:30:05.511540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:18.547 [2024-12-06 13:30:05.511556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.511620] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:35:18.547 [2024-12-06 13:30:05.511800] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:18.547 [2024-12-06 13:30:05.511835] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:18.547 [2024-12-06 13:30:05.511857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:18.547 [2024-12-06 13:30:05.511879] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:18.547 [2024-12-06 13:30:05.511897] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:35:18.547 [2024-12-06 13:30:05.511915] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:18.547 [2024-12-06 13:30:05.511931] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:18.547 [2024-12-06 13:30:05.511955] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:18.547 [2024-12-06 13:30:05.511970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:18.547 [2024-12-06 13:30:05.511995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.512010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:18.547 [2024-12-06 13:30:05.512028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 00:35:18.547 [2024-12-06 13:30:05.512044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.512169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.547 [2024-12-06 13:30:05.512207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:18.547 [2024-12-06 13:30:05.512228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.089 ms 00:35:18.547 [2024-12-06 13:30:05.512244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.547 [2024-12-06 13:30:05.512375] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:18.547 [2024-12-06 13:30:05.512398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:18.547 [2024-12-06 13:30:05.512419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:18.547 [2024-12-06 13:30:05.512435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.547 [2024-12-06 13:30:05.512453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:18.547 [2024-12-06 13:30:05.512468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:18.547 [2024-12-06 13:30:05.512486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:18.547 [2024-12-06 13:30:05.512501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:18.547 [2024-12-06 13:30:05.512518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:18.547 [2024-12-06 13:30:05.512533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.547 [2024-12-06 13:30:05.512550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:18.547 [2024-12-06 13:30:05.512564] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:35:18.547 [2024-12-06 13:30:05.512584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.547 [2024-12-06 13:30:05.512599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:18.547 [2024-12-06 13:30:05.512616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:18.547 [2024-12-06 13:30:05.512630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.547 [2024-12-06 13:30:05.512650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:18.548 [2024-12-06 13:30:05.512664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:18.548 [2024-12-06 13:30:05.512682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.548 [2024-12-06 13:30:05.512697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:18.548 [2024-12-06 13:30:05.512714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:18.548 [2024-12-06 13:30:05.512728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:18.548 [2024-12-06 13:30:05.512745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:18.548 [2024-12-06 13:30:05.512761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:18.548 [2024-12-06 13:30:05.512778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:18.548 [2024-12-06 13:30:05.512792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:18.548 [2024-12-06 13:30:05.512809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:18.548 [2024-12-06 13:30:05.512824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:18.548 [2024-12-06 13:30:05.512841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:18.548 [2024-12-06 13:30:05.512869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:18.548 [2024-12-06 13:30:05.512893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:18.548 [2024-12-06 13:30:05.512909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:18.548 [2024-12-06 13:30:05.512929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:18.548 [2024-12-06 13:30:05.512944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.548 [2024-12-06 13:30:05.512961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:18.548 [2024-12-06 13:30:05.512976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:18.548 [2024-12-06 13:30:05.512993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.548 [2024-12-06 13:30:05.513007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:18.548 [2024-12-06 13:30:05.513026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:18.548 [2024-12-06 13:30:05.513041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.548 [2024-12-06 13:30:05.513058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:18.548 [2024-12-06 13:30:05.513073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:18.548 [2024-12-06 13:30:05.513090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.548 [2024-12-06 13:30:05.513103] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:35:18.548 [2024-12-06 13:30:05.513122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:18.548 [2024-12-06 13:30:05.513157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:18.548 [2024-12-06 13:30:05.513177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:18.548 [2024-12-06 13:30:05.513193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:18.548 [2024-12-06 13:30:05.513213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:18.548 [2024-12-06 13:30:05.513228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:18.548 [2024-12-06 13:30:05.513245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:18.548 [2024-12-06 13:30:05.513260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:18.548 [2024-12-06 13:30:05.513276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:18.548 [2024-12-06 13:30:05.513293] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:18.548 [2024-12-06 13:30:05.513321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:18.548 [2024-12-06 13:30:05.513356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:18.548 [2024-12-06 13:30:05.513404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:18.548 [2024-12-06 13:30:05.513423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:18.548 [2024-12-06 13:30:05.513439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:18.548 [2024-12-06 13:30:05.513457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:18.548 [2024-12-06 13:30:05.513576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:18.548 [2024-12-06 13:30:05.513597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:18.548 [2024-12-06 13:30:05.513636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:18.548 [2024-12-06 13:30:05.513652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:18.548 [2024-12-06 13:30:05.513669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:18.548 [2024-12-06 13:30:05.513686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:18.548 [2024-12-06 13:30:05.513704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:18.548 [2024-12-06 13:30:05.513722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.387 ms 00:35:18.548 [2024-12-06 13:30:05.513739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:18.548 [2024-12-06 13:30:05.513807] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:35:18.548 [2024-12-06 13:30:05.513854] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:23.901 [2024-12-06 13:30:10.283185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.283302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:23.901 [2024-12-06 13:30:10.283326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4769.402 ms 00:35:23.901 [2024-12-06 13:30:10.283344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.329095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.329197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:23.901 [2024-12-06 13:30:10.329220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.384 ms 00:35:23.901 [2024-12-06 13:30:10.329237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.329428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.329451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:23.901 [2024-12-06 13:30:10.329466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:35:23.901 [2024-12-06 13:30:10.329488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.379315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.379410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:23.901 [2024-12-06 13:30:10.379431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.746 ms 00:35:23.901 [2024-12-06 13:30:10.379449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.379521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.379545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:23.901 [2024-12-06 13:30:10.379559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:23.901 [2024-12-06 13:30:10.379573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.380521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.380574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:23.901 [2024-12-06 13:30:10.380602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:35:23.901 [2024-12-06 13:30:10.380618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.380677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.380695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:23.901 [2024-12-06 13:30:10.380711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:35:23.901 [2024-12-06 13:30:10.380728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.405996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.406063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:23.901 [2024-12-06 13:30:10.406084] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.240 ms 00:35:23.901 [2024-12-06 13:30:10.406100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.429441] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:23.901 [2024-12-06 13:30:10.431299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.431332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:23.901 [2024-12-06 13:30:10.431357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.021 ms 00:35:23.901 [2024-12-06 13:30:10.431370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.470478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.470526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:35:23.901 [2024-12-06 13:30:10.470548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.066 ms 00:35:23.901 [2024-12-06 13:30:10.470567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.470723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.470746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:23.901 [2024-12-06 13:30:10.470767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.104 ms 00:35:23.901 [2024-12-06 13:30:10.470779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.497974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.498016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:35:23.901 [2024-12-06 13:30:10.498037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.129 ms 00:35:23.901 [2024-12-06 13:30:10.498050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.525564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.525617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:35:23.901 [2024-12-06 13:30:10.525639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.458 ms 00:35:23.901 [2024-12-06 13:30:10.525651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.526433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.526467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:23.901 [2024-12-06 13:30:10.526486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.734 ms 00:35:23.901 [2024-12-06 13:30:10.526501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.637405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.637485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:35:23.901 [2024-12-06 13:30:10.637550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 110.836 ms 00:35:23.901 [2024-12-06 13:30:10.637564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.671842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:35:23.901 [2024-12-06 13:30:10.671905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:35:23.901 [2024-12-06 13:30:10.671961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.172 ms 00:35:23.901 [2024-12-06 13:30:10.671974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.704754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.704829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:35:23.901 [2024-12-06 13:30:10.704868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.717 ms 00:35:23.901 [2024-12-06 13:30:10.704882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.736888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.736961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:23.901 [2024-12-06 13:30:10.736987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.927 ms 00:35:23.901 [2024-12-06 13:30:10.737001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.737080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.737100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:23.901 [2024-12-06 13:30:10.737121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:23.901 [2024-12-06 13:30:10.737133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.737308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.901 [2024-12-06 13:30:10.737332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:23.901 [2024-12-06 13:30:10.737349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:35:23.901 [2024-12-06 13:30:10.737361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.901 [2024-12-06 13:30:10.739008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5247.167 ms, result 0 00:35:23.901 { 00:35:23.901 "name": "ftl", 00:35:23.901 "uuid": "ed9de9cd-2edd-4475-875f-95f7606860db" 00:35:23.901 } 00:35:23.902 13:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:35:24.159 [2024-12-06 13:30:11.001700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.160 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:35:24.418 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:35:24.676 [2024-12-06 13:30:11.574530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:24.676 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:35:24.933 [2024-12-06 13:30:11.838378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:24.933 13:30:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:35:25.500 Fill FTL, iteration 1 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83910 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83910 /var/tmp/spdk.tgt.sock 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83910 ']' 00:35:25.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.500 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:25.500 [2024-12-06 13:30:12.494098] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
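
For reference, the fill/verify loop that the xtrace above is stepping through amounts to the following sketch. It is reconstructed from the traced variables (bs=1048576, count=1024, qd=2, iterations=2, seek/skip advancing by 1024 MiB per pass), not copied from upgrade_shutdown.sh itself, and "testfile" is just a label for the scratch path shown later in the trace:

    # sketch reconstructed from the xtrace, not the script itself
    seek=0 skip=0 iterations=2
    sums=()
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$((seek + 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testfile --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sums[i]=$(md5sum $testfile | cut -f1 -d' ')   # saved to compare after restart
    done
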
00:35:25.500 [2024-12-06 13:30:12.494293] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83910 ] 00:35:25.759 [2024-12-06 13:30:12.683887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.017 [2024-12-06 13:30:12.833733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.951 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.951 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:26.951 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:35:27.210 ftln1 00:35:27.210 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:35:27.210 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83910 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83910 ']' 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83910 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83910 00:35:27.469 killing process with pid 83910 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83910' 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83910 00:35:27.469 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83910 00:35:30.046 13:30:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:35:30.046 13:30:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:30.046 [2024-12-06 13:30:16.563472] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
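
As the trace above shows, tcp_dd stands up a second SPDK app (spdk_tgt, pid 83910, cpumask [1], RPC socket /var/tmp/spdk.tgt.sock) acting as the NVMe/TCP initiator; it attaches to the nqn.2018-09.io.spdk:cnode0 subsystem exported earlier, and the remote FTL namespace shows up locally as bdev ftln1. The attach step is the rpc.py call visible in the xtrace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0
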
00:35:30.046 [2024-12-06 13:30:16.563679] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83963 ] 00:35:30.046 [2024-12-06 13:30:16.743539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.046 [2024-12-06 13:30:16.864597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.422  [2024-12-06T13:30:19.395Z] Copying: 208/1024 [MB] (208 MBps) [2024-12-06T13:30:20.328Z] Copying: 419/1024 [MB] (211 MBps) [2024-12-06T13:30:21.701Z] Copying: 630/1024 [MB] (211 MBps) [2024-12-06T13:30:22.264Z] Copying: 836/1024 [MB] (206 MBps) [2024-12-06T13:30:23.639Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:35:36.623 00:35:36.623 Calculate MD5 checksum, iteration 1 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:36.623 13:30:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:36.623 [2024-12-06 13:30:23.484353] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:35:36.623 [2024-12-06 13:30:23.484563] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84030 ] 00:35:36.883 [2024-12-06 13:30:23.673348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.883 [2024-12-06 13:30:23.807280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.262  [2024-12-06T13:30:26.653Z] Copying: 511/1024 [MB] (511 MBps) [2024-12-06T13:30:26.653Z] Copying: 1013/1024 [MB] (502 MBps) [2024-12-06T13:30:27.221Z] Copying: 1024/1024 [MB] (average 505 MBps) 00:35:40.205 00:35:40.205 13:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:35:40.205 13:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:42.792 Fill FTL, iteration 2 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=829ed6acb6522a009413f8575e825889 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:42.792 13:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:42.792 [2024-12-06 13:30:29.543862] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:35:42.792 [2024-12-06 13:30:29.544402] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84093 ] 00:35:42.792 [2024-12-06 13:30:29.738056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.052 [2024-12-06 13:30:29.902105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.430  [2024-12-06T13:30:32.383Z] Copying: 207/1024 [MB] (207 MBps) [2024-12-06T13:30:33.759Z] Copying: 412/1024 [MB] (205 MBps) [2024-12-06T13:30:34.694Z] Copying: 610/1024 [MB] (198 MBps) [2024-12-06T13:30:35.629Z] Copying: 826/1024 [MB] (216 MBps) [2024-12-06T13:30:37.004Z] Copying: 1024/1024 [MB] (average 208 MBps) 00:35:49.988 00:35:49.988 Calculate MD5 checksum, iteration 2 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:49.988 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:49.988 [2024-12-06 13:30:36.705210] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
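
After the second checksum is recorded, the harness flips the prep_upgrade_on_shutdown property and counts how many NV-cache chunks hold data before shutting the target down. Judging from the jq filter in the xtrace further below, that check is roughly the following (the exact piping of bdev_ftl_get_properties into jq is an assumption; the filter itself is verbatim from the trace):

    # assumed arrangement: get_properties output piped into the traced jq filter
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length'
    # this run reports used=3: two CLOSED chunks at utilization 1.0
    # plus one OPEN chunk at 0.001953125
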
00:35:49.988 [2024-12-06 13:30:36.705735] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84164 ] 00:35:49.988 [2024-12-06 13:30:36.895415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.248 [2024-12-06 13:30:37.120358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.169  [2024-12-06T13:30:40.121Z] Copying: 465/1024 [MB] (465 MBps) [2024-12-06T13:30:40.379Z] Copying: 961/1024 [MB] (496 MBps) [2024-12-06T13:30:41.799Z] Copying: 1024/1024 [MB] (average 475 MBps) 00:35:54.783 00:35:54.783 13:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:35:54.783 13:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:56.685 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:56.685 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ea9a5213d1066c03ae2e61961192ce08 00:35:56.685 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:56.685 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:56.685 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:56.943 [2024-12-06 13:30:43.889221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:56.943 [2024-12-06 13:30:43.889319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:56.943 [2024-12-06 13:30:43.889345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:56.943 [2024-12-06 13:30:43.889359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:56.943 [2024-12-06 13:30:43.889404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:56.943 [2024-12-06 13:30:43.889430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:56.943 [2024-12-06 13:30:43.889445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:56.943 [2024-12-06 13:30:43.889457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:56.943 [2024-12-06 13:30:43.889488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:56.943 [2024-12-06 13:30:43.889503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:56.943 [2024-12-06 13:30:43.889518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:56.943 [2024-12-06 13:30:43.889531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:56.943 [2024-12-06 13:30:43.889623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.396 ms, result 0 00:35:56.943 true 00:35:56.943 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:57.202 { 00:35:57.202 "name": "ftl", 00:35:57.202 "properties": [ 00:35:57.202 { 00:35:57.202 "name": "superblock_version", 00:35:57.202 "value": 5, 00:35:57.202 "read-only": true 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "name": "base_device", 00:35:57.202 "bands": [ 00:35:57.202 { 00:35:57.202 "id": 
0, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 1, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 2, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 3, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 4, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 5, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 6, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 7, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 8, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 9, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 10, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 11, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 12, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 13, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 14, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 15, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 16, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 17, 00:35:57.202 "state": "FREE", 00:35:57.202 "validity": 0.0 00:35:57.202 } 00:35:57.202 ], 00:35:57.202 "read-only": true 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "name": "cache_device", 00:35:57.202 "type": "bdev", 00:35:57.202 "chunks": [ 00:35:57.202 { 00:35:57.202 "id": 0, 00:35:57.202 "state": "INACTIVE", 00:35:57.202 "utilization": 0.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 1, 00:35:57.202 "state": "CLOSED", 00:35:57.202 "utilization": 1.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 2, 00:35:57.202 "state": "CLOSED", 00:35:57.202 "utilization": 1.0 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 3, 00:35:57.202 "state": "OPEN", 00:35:57.202 "utilization": 0.001953125 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "id": 4, 00:35:57.202 "state": "OPEN", 00:35:57.202 "utilization": 0.0 00:35:57.202 } 00:35:57.202 ], 00:35:57.202 "read-only": true 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "name": "verbose_mode", 00:35:57.202 "value": true, 00:35:57.202 "unit": "", 00:35:57.202 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:57.202 }, 00:35:57.202 { 00:35:57.202 "name": "prep_upgrade_on_shutdown", 00:35:57.202 "value": false, 00:35:57.202 "unit": "", 00:35:57.202 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:57.202 } 00:35:57.202 ] 00:35:57.202 } 00:35:57.461 13:30:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:35:57.719 [2024-12-06 13:30:44.510044] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.719 [2024-12-06 13:30:44.510161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:57.719 [2024-12-06 13:30:44.510189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:57.719 [2024-12-06 13:30:44.510203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.719 [2024-12-06 13:30:44.510249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.719 [2024-12-06 13:30:44.510266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:57.719 [2024-12-06 13:30:44.510280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:57.719 [2024-12-06 13:30:44.510292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.719 [2024-12-06 13:30:44.510321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.719 [2024-12-06 13:30:44.510368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:57.719 [2024-12-06 13:30:44.510382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:57.720 [2024-12-06 13:30:44.510394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.720 [2024-12-06 13:30:44.510486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.449 ms, result 0 00:35:57.720 true 00:35:57.720 13:30:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:57.720 13:30:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:35:57.720 13:30:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:57.977 13:30:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:35:57.977 13:30:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:35:57.977 13:30:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:58.235 [2024-12-06 13:30:45.065263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:58.235 [2024-12-06 13:30:45.065365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:58.235 [2024-12-06 13:30:45.065390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:58.235 [2024-12-06 13:30:45.065403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:58.235 [2024-12-06 13:30:45.065444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:58.235 [2024-12-06 13:30:45.065461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:58.235 [2024-12-06 13:30:45.065475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:58.235 [2024-12-06 13:30:45.065487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:58.235 [2024-12-06 13:30:45.065516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:58.235 [2024-12-06 13:30:45.065531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:58.235 [2024-12-06 13:30:45.065543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:58.235 [2024-12-06 
13:30:45.065556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:58.235 [2024-12-06 13:30:45.065660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.371 ms, result 0 00:35:58.235 true 00:35:58.235 13:30:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:58.494 { 00:35:58.494 "name": "ftl", 00:35:58.494 "properties": [ 00:35:58.494 { 00:35:58.494 "name": "superblock_version", 00:35:58.494 "value": 5, 00:35:58.494 "read-only": true 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "name": "base_device", 00:35:58.494 "bands": [ 00:35:58.494 { 00:35:58.494 "id": 0, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 1, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 2, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 3, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 4, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 5, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 6, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 7, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 8, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 9, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 10, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 11, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 12, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 13, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 14, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 15, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 16, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 17, 00:35:58.494 "state": "FREE", 00:35:58.494 "validity": 0.0 00:35:58.494 } 00:35:58.494 ], 00:35:58.494 "read-only": true 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "name": "cache_device", 00:35:58.494 "type": "bdev", 00:35:58.494 "chunks": [ 00:35:58.494 { 00:35:58.494 "id": 0, 00:35:58.494 "state": "INACTIVE", 00:35:58.494 "utilization": 0.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 1, 00:35:58.494 "state": "CLOSED", 00:35:58.494 "utilization": 1.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 2, 00:35:58.494 "state": "CLOSED", 00:35:58.494 "utilization": 1.0 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 3, 00:35:58.494 "state": "OPEN", 00:35:58.494 "utilization": 0.001953125 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "id": 4, 00:35:58.494 "state": "OPEN", 00:35:58.494 "utilization": 0.0 00:35:58.494 } 00:35:58.494 ], 00:35:58.494 "read-only": true 00:35:58.494 
}, 00:35:58.494 { 00:35:58.494 "name": "verbose_mode", 00:35:58.494 "value": true, 00:35:58.494 "unit": "", 00:35:58.494 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:58.494 }, 00:35:58.494 { 00:35:58.494 "name": "prep_upgrade_on_shutdown", 00:35:58.494 "value": true, 00:35:58.494 "unit": "", 00:35:58.494 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:58.494 } 00:35:58.494 ] 00:35:58.494 } 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83760 ]] 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83760 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83760 ']' 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83760 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83760 00:35:58.494 killing process with pid 83760 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83760' 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83760 00:35:58.494 13:30:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83760 00:35:59.864 [2024-12-06 13:30:46.691311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:35:59.864 [2024-12-06 13:30:46.709891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.864 [2024-12-06 13:30:46.709963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:35:59.864 [2024-12-06 13:30:46.709991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:59.864 [2024-12-06 13:30:46.710020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.864 [2024-12-06 13:30:46.710098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:35:59.864 [2024-12-06 13:30:46.715483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.864 [2024-12-06 13:30:46.715550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:35:59.864 [2024-12-06 13:30:46.715568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.361 ms 00:35:59.864 [2024-12-06 13:30:46.715589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.653503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.653645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:09.838 [2024-12-06 13:30:55.653682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8937.891 ms 00:36:09.838 [2024-12-06 13:30:55.653697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 
13:30:55.655078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.655113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:09.838 [2024-12-06 13:30:55.655129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.352 ms 00:36:09.838 [2024-12-06 13:30:55.655140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.656386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.656433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:09.838 [2024-12-06 13:30:55.656450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.192 ms 00:36:09.838 [2024-12-06 13:30:55.656470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.670394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.670440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:09.838 [2024-12-06 13:30:55.670458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.873 ms 00:36:09.838 [2024-12-06 13:30:55.670472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.679534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.679581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:09.838 [2024-12-06 13:30:55.679599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.012 ms 00:36:09.838 [2024-12-06 13:30:55.679612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.679742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.679801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:09.838 [2024-12-06 13:30:55.679817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:36:09.838 [2024-12-06 13:30:55.679829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.692495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.692538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:09.838 [2024-12-06 13:30:55.692555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.642 ms 00:36:09.838 [2024-12-06 13:30:55.692567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.705206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.705251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:09.838 [2024-12-06 13:30:55.705268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.597 ms 00:36:09.838 [2024-12-06 13:30:55.705279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.717858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.717898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:09.838 [2024-12-06 13:30:55.717915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.538 ms 00:36:09.838 [2024-12-06 13:30:55.717927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:36:09.838 [2024-12-06 13:30:55.730103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.730153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:09.838 [2024-12-06 13:30:55.730170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.065 ms 00:36:09.838 [2024-12-06 13:30:55.730197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.730254] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:09.838 [2024-12-06 13:30:55.730296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:09.838 [2024-12-06 13:30:55.730313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:09.838 [2024-12-06 13:30:55.730352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:09.838 [2024-12-06 13:30:55.730370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:09.838 [2024-12-06 13:30:55.730572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:09.838 [2024-12-06 13:30:55.730585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ed9de9cd-2edd-4475-875f-95f7606860db 00:36:09.838 [2024-12-06 13:30:55.730600] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:09.838 [2024-12-06 
13:30:55.730612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:36:09.838 [2024-12-06 13:30:55.730624] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:36:09.838 [2024-12-06 13:30:55.730637] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:36:09.838 [2024-12-06 13:30:55.730656] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:09.838 [2024-12-06 13:30:55.730669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:09.838 [2024-12-06 13:30:55.730687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:09.838 [2024-12-06 13:30:55.730698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:09.838 [2024-12-06 13:30:55.730708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:09.838 [2024-12-06 13:30:55.730720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.730733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:09.838 [2024-12-06 13:30:55.730748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:36:09.838 [2024-12-06 13:30:55.730760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.749331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.749414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:09.838 [2024-12-06 13:30:55.749440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.544 ms 00:36:09.838 [2024-12-06 13:30:55.749453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.750079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.838 [2024-12-06 13:30:55.750096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:09.838 [2024-12-06 13:30:55.750109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.596 ms 00:36:09.838 [2024-12-06 13:30:55.750119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.814360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.838 [2024-12-06 13:30:55.814459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:09.838 [2024-12-06 13:30:55.814482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.838 [2024-12-06 13:30:55.814496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.814579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.838 [2024-12-06 13:30:55.814596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:09.838 [2024-12-06 13:30:55.814610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.838 [2024-12-06 13:30:55.814622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.814795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.838 [2024-12-06 13:30:55.814817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:09.838 [2024-12-06 13:30:55.814839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.838 [2024-12-06 13:30:55.814852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:36:09.838 [2024-12-06 13:30:55.814880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.838 [2024-12-06 13:30:55.814896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:09.838 [2024-12-06 13:30:55.814909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.838 [2024-12-06 13:30:55.814921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:55.939350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.838 [2024-12-06 13:30:55.939477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:09.838 [2024-12-06 13:30:55.939507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.838 [2024-12-06 13:30:55.939519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.838 [2024-12-06 13:30:56.032092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.838 [2024-12-06 13:30:56.032215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:09.839 [2024-12-06 13:30:56.032239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.839 [2024-12-06 13:30:56.032253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.839 [2024-12-06 13:30:56.032441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.839 [2024-12-06 13:30:56.032462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:09.839 [2024-12-06 13:30:56.032476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.839 [2024-12-06 13:30:56.032496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.839 [2024-12-06 13:30:56.032565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.839 [2024-12-06 13:30:56.032582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:09.839 [2024-12-06 13:30:56.032595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.839 [2024-12-06 13:30:56.032606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.839 [2024-12-06 13:30:56.032778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.839 [2024-12-06 13:30:56.032800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:09.839 [2024-12-06 13:30:56.032814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.839 [2024-12-06 13:30:56.032827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.839 [2024-12-06 13:30:56.032897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.839 [2024-12-06 13:30:56.032917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:09.839 [2024-12-06 13:30:56.032930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.839 [2024-12-06 13:30:56.032944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.839 [2024-12-06 13:30:56.033030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.839 [2024-12-06 13:30:56.033060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:09.839 [2024-12-06 13:30:56.033072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.839 [2024-12-06 13:30:56.033083] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.839 [2024-12-06 13:30:56.033176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:09.839 [2024-12-06 13:30:56.033197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:09.839 [2024-12-06 13:30:56.033210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:09.839 [2024-12-06 13:30:56.033223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.839 [2024-12-06 13:30:56.033433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9323.521 ms, result 0 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84411 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84411 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84411 ']' 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:13.138 13:30:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:13.138 [2024-12-06 13:30:59.738756] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:36:13.138 [2024-12-06 13:30:59.738951] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84411 ] 00:36:13.138 [2024-12-06 13:30:59.926227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.138 [2024-12-06 13:31:00.082342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.516 [2024-12-06 13:31:01.201813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:14.516 [2024-12-06 13:31:01.201928] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:14.516 [2024-12-06 13:31:01.357060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.357176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:14.516 [2024-12-06 13:31:01.357212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:14.516 [2024-12-06 13:31:01.357225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.357325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.357346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:14.516 [2024-12-06 13:31:01.357360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:36:14.516 [2024-12-06 13:31:01.357373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.357418] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:14.516 [2024-12-06 13:31:01.358518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:14.516 [2024-12-06 13:31:01.358558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.358574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:14.516 [2024-12-06 13:31:01.358594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.157 ms 00:36:14.516 [2024-12-06 13:31:01.358616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.361220] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:14.516 [2024-12-06 13:31:01.379376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.379493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:14.516 [2024-12-06 13:31:01.379546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.150 ms 00:36:14.516 [2024-12-06 13:31:01.379562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.379703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.379725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:14.516 [2024-12-06 13:31:01.379740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:36:14.516 [2024-12-06 13:31:01.379753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.393653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 
13:31:01.394005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:14.516 [2024-12-06 13:31:01.394051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.736 ms 00:36:14.516 [2024-12-06 13:31:01.394065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.394262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.394285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:14.516 [2024-12-06 13:31:01.394300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:36:14.516 [2024-12-06 13:31:01.394314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.394471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.394499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:14.516 [2024-12-06 13:31:01.394514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:14.516 [2024-12-06 13:31:01.394527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.394578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:14.516 [2024-12-06 13:31:01.400674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.400901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:14.516 [2024-12-06 13:31:01.400931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.115 ms 00:36:14.516 [2024-12-06 13:31:01.400953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.401020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.401038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:14.516 [2024-12-06 13:31:01.401053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:14.516 [2024-12-06 13:31:01.401065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.401151] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:14.516 [2024-12-06 13:31:01.401203] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:14.516 [2024-12-06 13:31:01.401251] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:14.516 [2024-12-06 13:31:01.401275] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:14.516 [2024-12-06 13:31:01.401390] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:14.516 [2024-12-06 13:31:01.401407] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:14.516 [2024-12-06 13:31:01.401423] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:14.516 [2024-12-06 13:31:01.401438] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:14.516 [2024-12-06 13:31:01.401465] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:36:14.516 [2024-12-06 13:31:01.401484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:14.516 [2024-12-06 13:31:01.401496] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:14.516 [2024-12-06 13:31:01.401508] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:14.516 [2024-12-06 13:31:01.401520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:14.516 [2024-12-06 13:31:01.401532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.401544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:14.516 [2024-12-06 13:31:01.401557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:36:14.516 [2024-12-06 13:31:01.401568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.401690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.516 [2024-12-06 13:31:01.401717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:14.516 [2024-12-06 13:31:01.401736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:36:14.516 [2024-12-06 13:31:01.401749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.516 [2024-12-06 13:31:01.401872] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:14.516 [2024-12-06 13:31:01.401889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:14.516 [2024-12-06 13:31:01.401903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:14.516 [2024-12-06 13:31:01.401917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.516 [2024-12-06 13:31:01.401929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:14.516 [2024-12-06 13:31:01.401941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:14.516 [2024-12-06 13:31:01.401952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:14.516 [2024-12-06 13:31:01.401964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:14.516 [2024-12-06 13:31:01.401977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:14.516 [2024-12-06 13:31:01.401989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.516 [2024-12-06 13:31:01.402001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:14.516 [2024-12-06 13:31:01.402012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:14.516 [2024-12-06 13:31:01.402022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.516 [2024-12-06 13:31:01.402035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:14.516 [2024-12-06 13:31:01.402046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:14.516 [2024-12-06 13:31:01.402058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.516 [2024-12-06 13:31:01.402085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:14.516 [2024-12-06 13:31:01.402096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:14.517 [2024-12-06 13:31:01.402107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.517 [2024-12-06 13:31:01.402119] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:14.517 [2024-12-06 13:31:01.402129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:14.517 [2024-12-06 13:31:01.402140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:14.517 [2024-12-06 13:31:01.402151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:14.517 [2024-12-06 13:31:01.402196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:14.517 [2024-12-06 13:31:01.402208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:14.517 [2024-12-06 13:31:01.402219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:14.517 [2024-12-06 13:31:01.402230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:14.517 [2024-12-06 13:31:01.402241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:14.517 [2024-12-06 13:31:01.402252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:14.517 [2024-12-06 13:31:01.402263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:14.517 [2024-12-06 13:31:01.402274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:14.517 [2024-12-06 13:31:01.402285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:14.517 [2024-12-06 13:31:01.402296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:14.517 [2024-12-06 13:31:01.402307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.517 [2024-12-06 13:31:01.402319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:14.517 [2024-12-06 13:31:01.402340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:14.517 [2024-12-06 13:31:01.402369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.517 [2024-12-06 13:31:01.402380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:14.517 [2024-12-06 13:31:01.402392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:14.517 [2024-12-06 13:31:01.402403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.517 [2024-12-06 13:31:01.402415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:14.517 [2024-12-06 13:31:01.402427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:14.517 [2024-12-06 13:31:01.402438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.517 [2024-12-06 13:31:01.402449] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:14.517 [2024-12-06 13:31:01.402463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:14.517 [2024-12-06 13:31:01.402486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:14.517 [2024-12-06 13:31:01.402499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:14.517 [2024-12-06 13:31:01.402612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:14.517 [2024-12-06 13:31:01.402624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:14.517 [2024-12-06 13:31:01.402636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:14.517 [2024-12-06 13:31:01.402648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:14.517 [2024-12-06 13:31:01.402660] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:14.517 [2024-12-06 13:31:01.402674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:14.517 [2024-12-06 13:31:01.402687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:14.517 [2024-12-06 13:31:01.402702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:14.517 [2024-12-06 13:31:01.402728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:14.517 [2024-12-06 13:31:01.402765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:14.517 [2024-12-06 13:31:01.402777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:14.517 [2024-12-06 13:31:01.402788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:14.517 [2024-12-06 13:31:01.402800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:14.517 [2024-12-06 13:31:01.402897] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:14.517 [2024-12-06 13:31:01.402911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:14.517 [2024-12-06 13:31:01.402939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:14.517 [2024-12-06 13:31:01.402951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:14.517 [2024-12-06 13:31:01.402963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:14.517 [2024-12-06 13:31:01.402991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.517 [2024-12-06 13:31:01.403004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:14.517 [2024-12-06 13:31:01.403022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.188 ms 00:36:14.517 [2024-12-06 13:31:01.403034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.517 [2024-12-06 13:31:01.403104] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:36:14.517 [2024-12-06 13:31:01.403122] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:36:17.804 [2024-12-06 13:31:04.299841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.804 [2024-12-06 13:31:04.300202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:17.804 [2024-12-06 13:31:04.300241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2896.751 ms 00:36:17.804 [2024-12-06 13:31:04.300256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.804 [2024-12-06 13:31:04.347279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.804 [2024-12-06 13:31:04.347369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:17.804 [2024-12-06 13:31:04.347393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.669 ms 00:36:17.804 [2024-12-06 13:31:04.347408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.804 [2024-12-06 13:31:04.347597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.804 [2024-12-06 13:31:04.347643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:17.804 [2024-12-06 13:31:04.347658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:36:17.804 [2024-12-06 13:31:04.347670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.804 [2024-12-06 13:31:04.399442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.804 [2024-12-06 13:31:04.399788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:17.804 [2024-12-06 13:31:04.399831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.706 ms 00:36:17.804 [2024-12-06 13:31:04.399847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.804 [2024-12-06 13:31:04.399948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.804 [2024-12-06 13:31:04.399966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:17.805 [2024-12-06 13:31:04.399980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:17.805 [2024-12-06 13:31:04.399993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.400916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.400938] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:17.805 [2024-12-06 13:31:04.400953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.791 ms 00:36:17.805 [2024-12-06 13:31:04.400967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.401046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.401062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:17.805 [2024-12-06 13:31:04.401076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:36:17.805 [2024-12-06 13:31:04.401088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.426130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.426488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:17.805 [2024-12-06 13:31:04.426523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.998 ms 00:36:17.805 [2024-12-06 13:31:04.426538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.460365] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:36:17.805 [2024-12-06 13:31:04.460448] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:17.805 [2024-12-06 13:31:04.460473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.460487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:36:17.805 [2024-12-06 13:31:04.460505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.674 ms 00:36:17.805 [2024-12-06 13:31:04.460516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.478649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.478937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:36:17.805 [2024-12-06 13:31:04.478971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.053 ms 00:36:17.805 [2024-12-06 13:31:04.478986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.494579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.494636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:36:17.805 [2024-12-06 13:31:04.494658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.515 ms 00:36:17.805 [2024-12-06 13:31:04.494670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.510036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.510316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:36:17.805 [2024-12-06 13:31:04.510363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.297 ms 00:36:17.805 [2024-12-06 13:31:04.510379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.511522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.511551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:17.805 [2024-12-06 
13:31:04.511567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.933 ms 00:36:17.805 [2024-12-06 13:31:04.511585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.605578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.605733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:17.805 [2024-12-06 13:31:04.605763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.958 ms 00:36:17.805 [2024-12-06 13:31:04.605777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.619514] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:17.805 [2024-12-06 13:31:04.621430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.621469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:17.805 [2024-12-06 13:31:04.621490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.547 ms 00:36:17.805 [2024-12-06 13:31:04.621503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.621666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.621691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:36:17.805 [2024-12-06 13:31:04.621706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:36:17.805 [2024-12-06 13:31:04.621718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.621845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.621867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:17.805 [2024-12-06 13:31:04.621881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:36:17.805 [2024-12-06 13:31:04.621894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.621935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.621951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:17.805 [2024-12-06 13:31:04.621971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:17.805 [2024-12-06 13:31:04.621983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.622071] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:17.805 [2024-12-06 13:31:04.622097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.622109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:17.805 [2024-12-06 13:31:04.622123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:36:17.805 [2024-12-06 13:31:04.622159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.655415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.655506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:17.805 [2024-12-06 13:31:04.655528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.212 ms 00:36:17.805 [2024-12-06 13:31:04.655553] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.655668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:17.805 [2024-12-06 13:31:04.655688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:17.805 [2024-12-06 13:31:04.655704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:36:17.805 [2024-12-06 13:31:04.655717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:17.805 [2024-12-06 13:31:04.657563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3299.772 ms, result 0 00:36:17.805 [2024-12-06 13:31:04.671959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.805 [2024-12-06 13:31:04.687965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:17.805 [2024-12-06 13:31:04.698857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:17.805 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.805 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:17.805 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:17.805 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:17.805 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:36:18.063 [2024-12-06 13:31:05.027005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:18.063 [2024-12-06 13:31:05.027429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:18.063 [2024-12-06 13:31:05.027474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:36:18.063 [2024-12-06 13:31:05.027490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:18.063 [2024-12-06 13:31:05.027547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:18.063 [2024-12-06 13:31:05.027564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:18.063 [2024-12-06 13:31:05.027578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:18.063 [2024-12-06 13:31:05.027590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:18.063 [2024-12-06 13:31:05.027619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:18.063 [2024-12-06 13:31:05.027634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:18.063 [2024-12-06 13:31:05.027647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:18.063 [2024-12-06 13:31:05.027659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:18.063 [2024-12-06 13:31:05.027784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.737 ms, result 0 00:36:18.063 true 00:36:18.063 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:18.320 { 00:36:18.320 "name": "ftl", 00:36:18.320 "properties": [ 00:36:18.320 { 00:36:18.320 "name": "superblock_version", 00:36:18.320 "value": 5, 00:36:18.320 "read-only": true 00:36:18.320 }, 
00:36:18.320 { 00:36:18.320 "name": "base_device", 00:36:18.320 "bands": [ 00:36:18.320 { 00:36:18.320 "id": 0, 00:36:18.320 "state": "CLOSED", 00:36:18.320 "validity": 1.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 1, 00:36:18.320 "state": "CLOSED", 00:36:18.320 "validity": 1.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 2, 00:36:18.320 "state": "CLOSED", 00:36:18.320 "validity": 0.007843137254901933 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 3, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 4, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 5, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 6, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 7, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 8, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 9, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 10, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 11, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 12, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 13, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 14, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 15, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 16, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 17, 00:36:18.320 "state": "FREE", 00:36:18.320 "validity": 0.0 00:36:18.320 } 00:36:18.320 ], 00:36:18.320 "read-only": true 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "name": "cache_device", 00:36:18.320 "type": "bdev", 00:36:18.320 "chunks": [ 00:36:18.320 { 00:36:18.320 "id": 0, 00:36:18.320 "state": "INACTIVE", 00:36:18.320 "utilization": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 1, 00:36:18.320 "state": "OPEN", 00:36:18.320 "utilization": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 2, 00:36:18.320 "state": "OPEN", 00:36:18.320 "utilization": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 3, 00:36:18.320 "state": "FREE", 00:36:18.320 "utilization": 0.0 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "id": 4, 00:36:18.320 "state": "FREE", 00:36:18.320 "utilization": 0.0 00:36:18.320 } 00:36:18.320 ], 00:36:18.320 "read-only": true 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "name": "verbose_mode", 00:36:18.320 "value": true, 00:36:18.320 "unit": "", 00:36:18.320 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:36:18.320 }, 00:36:18.320 { 00:36:18.320 "name": "prep_upgrade_on_shutdown", 00:36:18.320 "value": false, 00:36:18.320 "unit": "", 00:36:18.320 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:36:18.320 } 00:36:18.320 ] 00:36:18.320 } 00:36:18.577 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:36:18.577 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:36:18.577 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:18.834 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:36:18.834 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:36:18.834 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:36:18.834 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:18.834 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:36:19.092 Validate MD5 checksum, iteration 1 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:19.092 13:31:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:19.092 [2024-12-06 13:31:05.986896] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:36:19.092 [2024-12-06 13:31:05.988286] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84489 ] 00:36:19.350 [2024-12-06 13:31:06.173780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.350 [2024-12-06 13:31:06.317399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.253  [2024-12-06T13:31:09.205Z] Copying: 422/1024 [MB] (422 MBps) [2024-12-06T13:31:09.463Z] Copying: 845/1024 [MB] (423 MBps) [2024-12-06T13:31:10.840Z] Copying: 1024/1024 [MB] (average 422 MBps) 00:36:23.824 00:36:23.824 13:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:23.824 13:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:26.358 Validate MD5 checksum, iteration 2 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=829ed6acb6522a009413f8575e825889 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 829ed6acb6522a009413f8575e825889 != \8\2\9\e\d\6\a\c\b\6\5\2\2\a\0\0\9\4\1\3\f\8\5\7\5\e\8\2\5\8\8\9 ]] 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:26.358 13:31:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:26.358 [2024-12-06 13:31:12.873681] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:36:26.358 [2024-12-06 13:31:12.874079] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84561 ] 00:36:26.358 [2024-12-06 13:31:13.056199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.358 [2024-12-06 13:31:13.209450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.261  [2024-12-06T13:31:15.844Z] Copying: 422/1024 [MB] (422 MBps) [2024-12-06T13:31:16.411Z] Copying: 843/1024 [MB] (421 MBps) [2024-12-06T13:31:18.314Z] Copying: 1024/1024 [MB] (average 423 MBps) 00:36:31.298 00:36:31.298 13:31:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:31.298 13:31:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ea9a5213d1066c03ae2e61961192ce08 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ea9a5213d1066c03ae2e61961192ce08 != \e\a\9\a\5\2\1\3\d\1\0\6\6\c\0\3\a\e\2\e\6\1\9\6\1\1\9\2\c\e\0\8 ]] 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84411 ]] 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84411 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84636 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84636 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84636 ']' 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:33.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:33.203 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:33.462 [2024-12-06 13:31:20.329847] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:36:33.462 [2024-12-06 13:31:20.330026] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84636 ] 00:36:33.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84411 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:36:33.721 [2024-12-06 13:31:20.517796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.721 [2024-12-06 13:31:20.667891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.102 [2024-12-06 13:31:21.733861] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:35.102 [2024-12-06 13:31:21.734001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:35.102 [2024-12-06 13:31:21.887195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.887263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:35.102 [2024-12-06 13:31:21.887284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:35.102 [2024-12-06 13:31:21.887295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.887365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.887383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:35.102 [2024-12-06 13:31:21.887395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:36:35.102 [2024-12-06 13:31:21.887406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.887453] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:35.102 [2024-12-06 13:31:21.888373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:35.102 [2024-12-06 13:31:21.888399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.888410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:35.102 [2024-12-06 13:31:21.888422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.961 ms 00:36:35.102 [2024-12-06 13:31:21.888440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.888969] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:35.102 [2024-12-06 13:31:21.912228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.912268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:35.102 [2024-12-06 13:31:21.912285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.260 ms 00:36:35.102 [2024-12-06 13:31:21.912296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.923577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:36:35.102 [2024-12-06 13:31:21.923882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:35.102 [2024-12-06 13:31:21.923910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:36:35.102 [2024-12-06 13:31:21.923925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.924513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.924538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:35.102 [2024-12-06 13:31:21.924552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:36:35.102 [2024-12-06 13:31:21.924563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.924672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.924691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:35.102 [2024-12-06 13:31:21.924705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:36:35.102 [2024-12-06 13:31:21.924717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.924756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.924771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:35.102 [2024-12-06 13:31:21.924785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:35.102 [2024-12-06 13:31:21.924807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.924841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:35.102 [2024-12-06 13:31:21.928489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.928521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:35.102 [2024-12-06 13:31:21.928536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.656 ms 00:36:35.102 [2024-12-06 13:31:21.928546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.928583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.928606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:35.102 [2024-12-06 13:31:21.928634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:35.102 [2024-12-06 13:31:21.928662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.928710] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:35.102 [2024-12-06 13:31:21.928758] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:35.102 [2024-12-06 13:31:21.928801] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:35.102 [2024-12-06 13:31:21.928828] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:35.102 [2024-12-06 13:31:21.928941] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:35.102 [2024-12-06 13:31:21.928958] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:35.102 [2024-12-06 13:31:21.928988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:35.102 [2024-12-06 13:31:21.929017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:35.102 [2024-12-06 13:31:21.929031] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:35.102 [2024-12-06 13:31:21.929064] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:35.102 [2024-12-06 13:31:21.929075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:35.102 [2024-12-06 13:31:21.929086] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:35.102 [2024-12-06 13:31:21.929113] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:35.102 [2024-12-06 13:31:21.929129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.929140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:35.102 [2024-12-06 13:31:21.929152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.423 ms 00:36:35.102 [2024-12-06 13:31:21.929162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.929280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.102 [2024-12-06 13:31:21.929298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:35.102 [2024-12-06 13:31:21.929311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.092 ms 00:36:35.102 [2024-12-06 13:31:21.929322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.102 [2024-12-06 13:31:21.929427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:35.102 [2024-12-06 13:31:21.929471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:35.102 [2024-12-06 13:31:21.929483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:35.102 [2024-12-06 13:31:21.929495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.102 [2024-12-06 13:31:21.929507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:35.102 [2024-12-06 13:31:21.929518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:35.102 [2024-12-06 13:31:21.929528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:35.102 [2024-12-06 13:31:21.929538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:35.102 [2024-12-06 13:31:21.929549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:35.102 [2024-12-06 13:31:21.929558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.102 [2024-12-06 13:31:21.929569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:35.102 [2024-12-06 13:31:21.929584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:35.102 [2024-12-06 13:31:21.929602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.102 [2024-12-06 13:31:21.929612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:35.102 [2024-12-06 13:31:21.929640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
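[Editor's note] The layout figures in the dump above hang together: the "l2p" region must hold one mapping entry per L2P entry. A quick back-of-the-envelope check in shell, with the values copied from the trace (the script itself is illustrative, not part of the test suite):

# Values copied from the ftl_layout_setup lines above; the check is an
# illustrative sanity pass, not test code.
entries=3774873     # "L2P entries" from the dump
addr_size=4         # "L2P address size" in bytes
bytes=$((entries * addr_size))
# 15,099,492 B is ~14.40 MiB, which the layout rounds up to the 14.50 MiB
# "l2p" region reported in the NV cache layout dump that continues below.
awk -v b="$bytes" 'BEGIN { printf "l2p payload: %.2f MiB\n", b / 1048576 }'
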
00:36:35.102 [2024-12-06 13:31:21.929651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.102 [2024-12-06 13:31:21.929662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:35.102 [2024-12-06 13:31:21.929688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:35.102 [2024-12-06 13:31:21.929699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.102 [2024-12-06 13:31:21.929711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:35.102 [2024-12-06 13:31:21.929721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:35.102 [2024-12-06 13:31:21.929745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:35.102 [2024-12-06 13:31:21.929756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:35.102 [2024-12-06 13:31:21.929768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:35.102 [2024-12-06 13:31:21.929779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:35.102 [2024-12-06 13:31:21.929790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:35.102 [2024-12-06 13:31:21.929801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:35.102 [2024-12-06 13:31:21.929812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:35.102 [2024-12-06 13:31:21.929822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:35.102 [2024-12-06 13:31:21.929834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:35.102 [2024-12-06 13:31:21.929844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:35.102 [2024-12-06 13:31:21.929855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:35.102 [2024-12-06 13:31:21.929866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:35.102 [2024-12-06 13:31:21.929877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.102 [2024-12-06 13:31:21.929888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:35.103 [2024-12-06 13:31:21.929900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:35.103 [2024-12-06 13:31:21.929911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.103 [2024-12-06 13:31:21.929922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:35.103 [2024-12-06 13:31:21.929933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:35.103 [2024-12-06 13:31:21.929943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.103 [2024-12-06 13:31:21.929969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:35.103 [2024-12-06 13:31:21.929980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:35.103 [2024-12-06 13:31:21.930006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:35.103 [2024-12-06 13:31:21.930031] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:35.103 [2024-12-06 13:31:21.930058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:35.103 [2024-12-06 13:31:21.930068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:35.103 [2024-12-06 13:31:21.930079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:36:35.103 [2024-12-06 13:31:21.930090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:35.103 [2024-12-06 13:31:21.930101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:35.103 [2024-12-06 13:31:21.930110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:35.103 [2024-12-06 13:31:21.930121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:35.103 [2024-12-06 13:31:21.930131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:35.103 [2024-12-06 13:31:21.930141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:35.103 [2024-12-06 13:31:21.930153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:35.103 [2024-12-06 13:31:21.930166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:35.103 [2024-12-06 13:31:21.930189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:35.103 [2024-12-06 13:31:21.930221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:35.103 [2024-12-06 13:31:21.930231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:35.103 [2024-12-06 13:31:21.930255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:35.103 [2024-12-06 13:31:21.930269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:35.103 [2024-12-06 13:31:21.930372] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:36:35.103 [2024-12-06 13:31:21.930385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:35.103 [2024-12-06 13:31:21.930418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:35.103 [2024-12-06 13:31:21.930430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:35.103 [2024-12-06 13:31:21.930442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:35.103 [2024-12-06 13:31:21.930456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:21.930469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:35.103 [2024-12-06 13:31:21.930482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.087 ms 00:36:35.103 [2024-12-06 13:31:21.930494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:21.969226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:21.969296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:35.103 [2024-12-06 13:31:21.969314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.658 ms 00:36:35.103 [2024-12-06 13:31:21.969326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:21.969395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:21.969409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:35.103 [2024-12-06 13:31:21.969423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:35.103 [2024-12-06 13:31:21.969434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.019663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.019740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:35.103 [2024-12-06 13:31:22.019760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.129 ms 00:36:35.103 [2024-12-06 13:31:22.019776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.019864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.019881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:35.103 [2024-12-06 13:31:22.019896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:35.103 [2024-12-06 13:31:22.019913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.020129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.020147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:35.103 [2024-12-06 13:31:22.020199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:36:35.103 [2024-12-06 13:31:22.020213] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.020284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.020307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:35.103 [2024-12-06 13:31:22.020320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:36:35.103 [2024-12-06 13:31:22.020332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.045045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.045111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:35.103 [2024-12-06 13:31:22.045146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.662 ms 00:36:35.103 [2024-12-06 13:31:22.045182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.045360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.045389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:36:35.103 [2024-12-06 13:31:22.045404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:36:35.103 [2024-12-06 13:31:22.045416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.081236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.081278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:36:35.103 [2024-12-06 13:31:22.081295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.775 ms 00:36:35.103 [2024-12-06 13:31:22.081307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.103 [2024-12-06 13:31:22.093072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.103 [2024-12-06 13:31:22.093352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:35.103 [2024-12-06 13:31:22.093391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms 00:36:35.103 [2024-12-06 13:31:22.093421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.362 [2024-12-06 13:31:22.179875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.362 [2024-12-06 13:31:22.180253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:35.362 [2024-12-06 13:31:22.180300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.372 ms 00:36:35.362 [2024-12-06 13:31:22.180315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.362 [2024-12-06 13:31:22.180583] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:36:35.362 [2024-12-06 13:31:22.180813] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:36:35.362 [2024-12-06 13:31:22.180989] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:36:35.362 [2024-12-06 13:31:22.181189] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:36:35.362 [2024-12-06 13:31:22.181232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.362 [2024-12-06 13:31:22.181263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:36:35.362 [2024-12-06 
13:31:22.181276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:36:35.362 [2024-12-06 13:31:22.181288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.362 [2024-12-06 13:31:22.181397] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:36:35.362 [2024-12-06 13:31:22.181418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.362 [2024-12-06 13:31:22.181436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:36:35.362 [2024-12-06 13:31:22.181449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:36:35.362 [2024-12-06 13:31:22.181461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.362 [2024-12-06 13:31:22.201079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.363 [2024-12-06 13:31:22.201189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:36:35.363 [2024-12-06 13:31:22.201210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.587 ms 00:36:35.363 [2024-12-06 13:31:22.201223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.363 [2024-12-06 13:31:22.213037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.363 [2024-12-06 13:31:22.213074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:36:35.363 [2024-12-06 13:31:22.213100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:36:35.363 [2024-12-06 13:31:22.213112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.363 [2024-12-06 13:31:22.213266] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:36:35.363 [2024-12-06 13:31:22.213611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.363 [2024-12-06 13:31:22.213650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:35.363 [2024-12-06 13:31:22.213671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:36:35.363 [2024-12-06 13:31:22.213685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.929 [2024-12-06 13:31:22.851803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.929 [2024-12-06 13:31:22.851898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:35.929 [2024-12-06 13:31:22.851924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 636.967 ms 00:36:35.929 [2024-12-06 13:31:22.851942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.929 [2024-12-06 13:31:22.857984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.929 [2024-12-06 13:31:22.858190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:35.929 [2024-12-06 13:31:22.858224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.304 ms 00:36:35.930 [2024-12-06 13:31:22.858241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.930 [2024-12-06 13:31:22.858826] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:36:35.930 [2024-12-06 13:31:22.858865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.930 [2024-12-06 13:31:22.858880] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:35.930 [2024-12-06 13:31:22.858894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:36:35.930 [2024-12-06 13:31:22.858907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.930 [2024-12-06 13:31:22.858993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.930 [2024-12-06 13:31:22.859011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:35.930 [2024-12-06 13:31:22.859023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:35.930 [2024-12-06 13:31:22.859042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:35.930 [2024-12-06 13:31:22.859120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 645.859 ms, result 0 00:36:35.930 [2024-12-06 13:31:22.859198] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:36:35.930 [2024-12-06 13:31:22.859499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:35.930 [2024-12-06 13:31:22.859681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:35.930 [2024-12-06 13:31:22.859706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:36:35.930 [2024-12-06 13:31:22.859720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.497 [2024-12-06 13:31:23.504414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.497 [2024-12-06 13:31:23.504506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:36.497 [2024-12-06 13:31:23.504548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 643.520 ms 00:36:36.497 [2024-12-06 13:31:23.504560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.497 [2024-12-06 13:31:23.510498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.497 [2024-12-06 13:31:23.510783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:36.497 [2024-12-06 13:31:23.510826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.573 ms 00:36:36.497 [2024-12-06 13:31:23.510839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.511401] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:36:36.756 [2024-12-06 13:31:23.511436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.511451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:36.756 [2024-12-06 13:31:23.511465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:36:36.756 [2024-12-06 13:31:23.511477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.511534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.511566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:36.756 [2024-12-06 13:31:23.511596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:36.756 [2024-12-06 13:31:23.511606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 
13:31:23.511694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 652.513 ms, result 0 00:36:36.756 [2024-12-06 13:31:23.511756] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:36.756 [2024-12-06 13:31:23.511772] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:36.756 [2024-12-06 13:31:23.511787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.511799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:36:36.756 [2024-12-06 13:31:23.511812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1298.561 ms 00:36:36.756 [2024-12-06 13:31:23.511824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.511860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.511882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:36:36.756 [2024-12-06 13:31:23.511894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:36:36.756 [2024-12-06 13:31:23.511906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.525573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:36.756 [2024-12-06 13:31:23.525906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.525932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:36.756 [2024-12-06 13:31:23.525963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.965 ms 00:36:36.756 [2024-12-06 13:31:23.525989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.526879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.526946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:36:36.756 [2024-12-06 13:31:23.526981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.727 ms 00:36:36.756 [2024-12-06 13:31:23.526992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.529411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.529437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:36:36.756 [2024-12-06 13:31:23.529450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.389 ms 00:36:36.756 [2024-12-06 13:31:23.529462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.529505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.529520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:36:36.756 [2024-12-06 13:31:23.529533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:36.756 [2024-12-06 13:31:23.529550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.529698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.529716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:36.756 
[2024-12-06 13:31:23.529729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:36:36.756 [2024-12-06 13:31:23.529740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.529770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.529783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:36.756 [2024-12-06 13:31:23.529796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:36:36.756 [2024-12-06 13:31:23.529808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.529858] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:36.756 [2024-12-06 13:31:23.529874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.529886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:36.756 [2024-12-06 13:31:23.529899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:36:36.756 [2024-12-06 13:31:23.529911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.530017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:36.756 [2024-12-06 13:31:23.530031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:36.756 [2024-12-06 13:31:23.530043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:36:36.756 [2024-12-06 13:31:23.530054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:36.756 [2024-12-06 13:31:23.531739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1643.984 ms, result 0 00:36:36.756 [2024-12-06 13:31:23.546994] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:36.756 [2024-12-06 13:31:23.563002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:36.756 [2024-12-06 13:31:23.573463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:36.756 Validate MD5 checksum, iteration 1 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:36.756 13:31:23 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:36.756 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:36.756 [2024-12-06 13:31:23.731962] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:36:36.756 [2024-12-06 13:31:23.732357] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84675 ] 00:36:37.014 [2024-12-06 13:31:23.923205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.272 [2024-12-06 13:31:24.077325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.170  [2024-12-06T13:31:26.756Z] Copying: 431/1024 [MB] (431 MBps) [2024-12-06T13:31:27.388Z] Copying: 868/1024 [MB] (437 MBps) [2024-12-06T13:31:28.766Z] Copying: 1024/1024 [MB] (average 432 MBps) 00:36:41.750 00:36:41.750 13:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:41.750 13:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:43.655 Validate MD5 checksum, iteration 2 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=829ed6acb6522a009413f8575e825889 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 829ed6acb6522a009413f8575e825889 != \8\2\9\e\d\6\a\c\b\6\5\2\2\a\0\0\9\4\1\3\f\8\5\7\5\e\8\2\5\8\8\9 ]] 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:43.655 13:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:43.655 
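[Editor's note] For readers squinting at the xtrace above: the test reads the rebuilt ftln1 bdev back in 1 GiB slices over NVMe/TCP and compares each slice's MD5 digest against the one recorded before shutdown; the backslash-riddled "[[ ... != \8\2\9... ]]" line is just bash quoting the right-hand side of a plain string comparison. A minimal sketch of the loop, where tcp_dd stands for the traced spdk_dd wrapper and the scratch path and digest table are placeholders filled in from this run's own output:

# Sketch only: tcp_dd is a stand-in for the traced wrapper around spdk_dd,
# and the scratch file plus digest table are illustrative placeholders.
file=/tmp/ftl_readback
iterations=2
sums=([1]=829ed6acb6522a009413f8575e825889 [2]=ea9a5213d1066c03ae2e61961192ce08)

test_validate_checksum() {
    local skip=0 i sum
    for ((i = 1; i <= iterations; i++)); do
        echo "Validate MD5 checksum, iteration $i"
        # 1024 x 1 MiB blocks from ftln1 at queue depth 2, advancing the
        # offset by 1 GiB per pass -- the arguments visible in the xtrace.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        # The \8\2\9... pattern in the xtrace is bash escaping this RHS;
        # semantically it is an exact string match.
        [[ $sum == "${sums[i]}" ]] || return 1
    done
}
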
[2024-12-06 13:31:30.530290] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:36:43.655 [2024-12-06 13:31:30.530510] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84744 ] 00:36:43.914 [2024-12-06 13:31:30.722433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.914 [2024-12-06 13:31:30.905226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:45.815  [2024-12-06T13:31:33.768Z] Copying: 411/1024 [MB] (411 MBps) [2024-12-06T13:31:34.334Z] Copying: 822/1024 [MB] (411 MBps) [2024-12-06T13:31:35.710Z] Copying: 1024/1024 [MB] (average 411 MBps) 00:36:48.694 00:36:48.694 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:48.694 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ea9a5213d1066c03ae2e61961192ce08 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ea9a5213d1066c03ae2e61961192ce08 != \e\a\9\a\5\2\1\3\d\1\0\6\6\c\0\3\a\e\2\e\6\1\9\6\1\1\9\2\c\e\0\8 ]] 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84636 ]] 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84636 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84636 ']' 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84636 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84636 00:36:50.596 killing process with pid 84636 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:50.596 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:50.597 13:31:37 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84636' 00:36:50.597 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84636 00:36:50.597 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84636 00:36:51.974 [2024-12-06 13:31:38.636745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:36:51.974 [2024-12-06 13:31:38.651735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.974 [2024-12-06 13:31:38.651782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:36:51.974 [2024-12-06 13:31:38.651803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:51.974 [2024-12-06 13:31:38.651814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.974 [2024-12-06 13:31:38.651844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:36:51.974 [2024-12-06 13:31:38.655558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.974 [2024-12-06 13:31:38.655592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:36:51.974 [2024-12-06 13:31:38.655606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.695 ms 00:36:51.974 [2024-12-06 13:31:38.655618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.974 [2024-12-06 13:31:38.655857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.974 [2024-12-06 13:31:38.655876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:51.974 [2024-12-06 13:31:38.655889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:36:51.974 [2024-12-06 13:31:38.655900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.974 [2024-12-06 13:31:38.657235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.974 [2024-12-06 13:31:38.657273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:51.974 [2024-12-06 13:31:38.657289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.315 ms 00:36:51.974 [2024-12-06 13:31:38.657307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.974 [2024-12-06 13:31:38.658419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.658754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:51.975 [2024-12-06 13:31:38.658781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.070 ms 00:36:51.975 [2024-12-06 13:31:38.658795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.669645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.669830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:51.975 [2024-12-06 13:31:38.669864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.788 ms 00:36:51.975 [2024-12-06 13:31:38.669877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.675960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.675998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:51.975 [2024-12-06 13:31:38.676014] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.037 ms 00:36:51.975 [2024-12-06 13:31:38.676025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.676101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.676119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:51.975 [2024-12-06 13:31:38.676147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:36:51.975 [2024-12-06 13:31:38.676166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.686569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.686755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:51.975 [2024-12-06 13:31:38.686780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.382 ms 00:36:51.975 [2024-12-06 13:31:38.686792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.698204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.698262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:51.975 [2024-12-06 13:31:38.698278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.354 ms 00:36:51.975 [2024-12-06 13:31:38.698305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.709743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.709777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:51.975 [2024-12-06 13:31:38.709797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.379 ms 00:36:51.975 [2024-12-06 13:31:38.709807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.720715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.720748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:51.975 [2024-12-06 13:31:38.720762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.842 ms 00:36:51.975 [2024-12-06 13:31:38.720773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.720810] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:51.975 [2024-12-06 13:31:38.720833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:51.975 [2024-12-06 13:31:38.720846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:51.975 [2024-12-06 13:31:38.720858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:51.975 [2024-12-06 13:31:38.720869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 
13:31:38.720915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.720999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.721010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.721022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.721033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:51.975 [2024-12-06 13:31:38.721046] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:51.975 [2024-12-06 13:31:38.721057] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ed9de9cd-2edd-4475-875f-95f7606860db 00:36:51.975 [2024-12-06 13:31:38.721069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:51.975 [2024-12-06 13:31:38.721081] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:36:51.975 [2024-12-06 13:31:38.721091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:36:51.975 [2024-12-06 13:31:38.721103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:36:51.975 [2024-12-06 13:31:38.721113] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:51.975 [2024-12-06 13:31:38.721124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:51.975 [2024-12-06 13:31:38.721172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:51.975 [2024-12-06 13:31:38.721184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:51.975 [2024-12-06 13:31:38.721194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:51.975 [2024-12-06 13:31:38.721207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.721218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:51.975 [2024-12-06 13:31:38.721231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:36:51.975 [2024-12-06 13:31:38.721258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.738218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.738253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:51.975 [2024-12-06 13:31:38.738269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 16.936 ms 00:36:51.975 [2024-12-06 13:31:38.738281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.738858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.975 [2024-12-06 13:31:38.738902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:51.975 [2024-12-06 13:31:38.738917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:36:51.975 [2024-12-06 13:31:38.738929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.797443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:51.975 [2024-12-06 13:31:38.797726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:51.975 [2024-12-06 13:31:38.797756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:51.975 [2024-12-06 13:31:38.797782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.797852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:51.975 [2024-12-06 13:31:38.797868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:51.975 [2024-12-06 13:31:38.797882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:51.975 [2024-12-06 13:31:38.797894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.798031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:51.975 [2024-12-06 13:31:38.798052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:51.975 [2024-12-06 13:31:38.798067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:51.975 [2024-12-06 13:31:38.798079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.798115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:51.975 [2024-12-06 13:31:38.798163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:51.975 [2024-12-06 13:31:38.798180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:51.975 [2024-12-06 13:31:38.798193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.975 [2024-12-06 13:31:38.906825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:51.975 [2024-12-06 13:31:38.906956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:51.975 [2024-12-06 13:31:38.906975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:51.975 [2024-12-06 13:31:38.906988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.234 [2024-12-06 13:31:38.992202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:52.234 [2024-12-06 13:31:38.992343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:52.234 [2024-12-06 13:31:38.992374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:52.234 [2024-12-06 13:31:38.992387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.234 [2024-12-06 13:31:38.992563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:52.234 [2024-12-06 13:31:38.992607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:52.234 [2024-12-06 13:31:38.992635] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:52.234 [2024-12-06 13:31:38.992662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.234 [2024-12-06 13:31:38.992763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:52.234 [2024-12-06 13:31:38.992802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:52.234 [2024-12-06 13:31:38.992817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:52.234 [2024-12-06 13:31:38.992829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.234 [2024-12-06 13:31:38.992976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:52.234 [2024-12-06 13:31:38.993001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:52.234 [2024-12-06 13:31:38.993022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:52.234 [2024-12-06 13:31:38.993034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.234 [2024-12-06 13:31:38.993116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:52.235 [2024-12-06 13:31:38.993136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:52.235 [2024-12-06 13:31:38.993155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:52.235 [2024-12-06 13:31:38.993176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.235 [2024-12-06 13:31:38.993302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:52.235 [2024-12-06 13:31:38.993320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:52.235 [2024-12-06 13:31:38.993332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:52.235 [2024-12-06 13:31:38.993343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.235 [2024-12-06 13:31:38.993403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:52.235 [2024-12-06 13:31:38.993427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:52.235 [2024-12-06 13:31:38.993450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:52.235 [2024-12-06 13:31:38.993462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.235 [2024-12-06 13:31:38.993687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 341.857 ms, result 0 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:53.609 Remove shared memory files 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
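[Editor's note] Every management step above is bracketed by trace_step lines carrying a name, a duration, and a status, and finish_msg then totals the whole sequence ('FTL shutdown', duration = 341.857 ms, result 0). When a run like this regresses, a few lines of awk are enough to rank the slow steps; this helper is illustrative, not part of the suite, and assumes the console's native one-entry-per-line form with build.log as a placeholder path:

# Pair each trace_step "name:" line with the "duration:" line that follows
# it, then print the slowest FTL management steps first.
awk '
    / trace_step: .* name: /     { sub(/.* name: /, ""); name = $0 }
    / trace_step: .* duration: / { sub(/.* duration: /, ""); sub(/ ms.*/, "")
                                   printf "%10.3f ms  %s\n", $0, name }
' build.log | sort -rn | head
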
00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84411 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:36:53.609 ************************************ 00:36:53.609 END TEST ftl_upgrade_shutdown 00:36:53.609 ************************************ 00:36:53.609 00:36:53.609 real 1m39.753s 00:36:53.609 user 2m17.583s 00:36:53.609 sys 0m27.711s 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.609 13:31:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:53.609 13:31:40 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:36:53.609 13:31:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:53.609 Process with pid 76836 is not found 00:36:53.609 13:31:40 ftl -- ftl/ftl.sh@14 -- # killprocess 76836 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 76836 ']' 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@958 -- # kill -0 76836 00:36:53.609 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76836) - No such process 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76836 is not found' 00:36:53.609 13:31:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:53.609 13:31:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84881 00:36:53.609 13:31:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84881 00:36:53.609 13:31:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@835 -- # '[' -z 84881 ']' 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.609 13:31:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:53.609 [2024-12-06 13:31:40.473002] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
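[Editor's note] The 'Process with pid 76836 is not found' line above is the graceful branch of the killprocess helper, whose probes (kill -0, uname, ps --no-headers -o comm=) all appear in the xtrace. A Linux-only paraphrase of that flow follows; the real helper in test/common/autotest_common.sh also branches on uname for FreeBSD, which this sketch omits:

# Paraphrase of the traced killprocess flow, not a verbatim copy.
killprocess() {
    local pid=$1 process_name
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"   # e.g. the stale pid 76836 above
        return 0
    fi
    process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 for an SPDK target
    [[ $process_name != sudo ]] || return 1         # never signal a sudo wrapper blindly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it when it is our own child
}
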
00:36:53.609 [2024-12-06 13:31:40.473455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84881 ] 00:36:53.866 [2024-12-06 13:31:40.672264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.866 [2024-12-06 13:31:40.851833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.237 13:31:41 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:55.237 13:31:41 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:55.237 13:31:41 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:55.237 nvme0n1 00:36:55.237 13:31:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:55.495 13:31:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:55.495 13:31:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:55.753 13:31:42 ftl -- ftl/common.sh@28 -- # stores=06855fda-e6c6-4c50-81bb-23b8361efe1f 00:36:55.753 13:31:42 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:55.753 13:31:42 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06855fda-e6c6-4c50-81bb-23b8361efe1f 00:36:56.022 13:31:42 ftl -- ftl/ftl.sh@23 -- # killprocess 84881 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 84881 ']' 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@958 -- # kill -0 84881 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@959 -- # uname 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84881 00:36:56.022 killing process with pid 84881 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84881' 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@973 -- # kill 84881 00:36:56.022 13:31:42 ftl -- common/autotest_common.sh@978 -- # wait 84881 00:36:58.618 13:31:45 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:58.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:58.618 Waiting for block devices as requested 00:36:58.876 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:58.876 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:58.876 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:59.133 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:37:04.397 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:37:04.397 Remove shared memory files 00:37:04.397 13:31:50 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:37:04.397 13:31:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:04.397 13:31:51 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:37:04.397 13:31:51 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:37:04.397 13:31:51 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:37:04.397 13:31:51 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:04.397 13:31:51 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:37:04.397 
************************************ 00:37:04.397 END TEST ftl 00:37:04.397 ************************************ 00:37:04.397 00:37:04.397 real 12m18.284s 00:37:04.397 user 15m26.696s 00:37:04.397 sys 1m41.255s 00:37:04.397 13:31:51 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.397 13:31:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:04.397 13:31:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:04.397 13:31:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:04.397 13:31:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:04.397 13:31:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:04.397 13:31:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:04.397 13:31:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:04.397 13:31:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:04.397 13:31:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:04.397 13:31:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:04.397 13:31:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:04.397 13:31:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:04.397 13:31:51 -- common/autotest_common.sh@10 -- # set +x 00:37:04.397 13:31:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:04.397 13:31:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:04.397 13:31:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:04.397 13:31:51 -- common/autotest_common.sh@10 -- # set +x 00:37:06.308 INFO: APP EXITING 00:37:06.308 INFO: killing all VMs 00:37:06.308 INFO: killing vhost app 00:37:06.308 INFO: EXIT DONE 00:37:06.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:06.884 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:37:06.884 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:37:06.884 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:37:06.884 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:37:07.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:07.712 Cleaning 00:37:07.712 Removing: /var/run/dpdk/spdk0/config 00:37:07.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:07.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:07.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:07.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:07.712 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:07.712 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:07.712 Removing: /var/run/dpdk/spdk0 00:37:07.712 Removing: /var/run/dpdk/spdk_pid57760 00:37:07.712 Removing: /var/run/dpdk/spdk_pid57995 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58224 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58328 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58378 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58512 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58530 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58741 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58846 00:37:07.712 Removing: /var/run/dpdk/spdk_pid58953 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59075 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59182 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59217 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59259 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59330 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59441 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59922 00:37:07.712 Removing: /var/run/dpdk/spdk_pid59997 
00:37:07.712 Removing: /var/run/dpdk/spdk_pid60071 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60087 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60242 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60263 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60411 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60433 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60497 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60520 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60584 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60608 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60803 00:37:07.712 Removing: /var/run/dpdk/spdk_pid60840 00:37:07.713 Removing: /var/run/dpdk/spdk_pid60923 00:37:07.713 Removing: /var/run/dpdk/spdk_pid61117 00:37:07.713 Removing: /var/run/dpdk/spdk_pid61212 00:37:07.713 Removing: /var/run/dpdk/spdk_pid61254 00:37:07.713 Removing: /var/run/dpdk/spdk_pid61732 00:37:07.713 Removing: /var/run/dpdk/spdk_pid61835 00:37:07.713 Removing: /var/run/dpdk/spdk_pid61950 00:37:07.713 Removing: /var/run/dpdk/spdk_pid62003 00:37:07.713 Removing: /var/run/dpdk/spdk_pid62034 00:37:07.713 Removing: /var/run/dpdk/spdk_pid62118 00:37:07.713 Removing: /var/run/dpdk/spdk_pid62755 00:37:07.713 Removing: /var/run/dpdk/spdk_pid62797 00:37:07.713 Removing: /var/run/dpdk/spdk_pid63319 00:37:07.713 Removing: /var/run/dpdk/spdk_pid63423 00:37:07.713 Removing: /var/run/dpdk/spdk_pid63542 00:37:07.713 Removing: /var/run/dpdk/spdk_pid63596 00:37:07.713 Removing: /var/run/dpdk/spdk_pid63622 00:37:07.713 Removing: /var/run/dpdk/spdk_pid63647 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65536 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65684 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65688 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65700 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65747 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65751 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65763 00:37:07.713 Removing: /var/run/dpdk/spdk_pid65808 00:37:07.973 Removing: /var/run/dpdk/spdk_pid65812 00:37:07.973 Removing: /var/run/dpdk/spdk_pid65824 00:37:07.973 Removing: /var/run/dpdk/spdk_pid65875 00:37:07.973 Removing: /var/run/dpdk/spdk_pid65879 00:37:07.973 Removing: /var/run/dpdk/spdk_pid65891 00:37:07.973 Removing: /var/run/dpdk/spdk_pid67298 00:37:07.973 Removing: /var/run/dpdk/spdk_pid67410 00:37:07.973 Removing: /var/run/dpdk/spdk_pid68828 00:37:07.973 Removing: /var/run/dpdk/spdk_pid70568 00:37:07.973 Removing: /var/run/dpdk/spdk_pid70653 00:37:07.973 Removing: /var/run/dpdk/spdk_pid70728 00:37:07.973 Removing: /var/run/dpdk/spdk_pid70839 00:37:07.973 Removing: /var/run/dpdk/spdk_pid70931 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71038 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71112 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71193 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71306 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71403 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71502 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71586 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71663 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71773 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71870 00:37:07.973 Removing: /var/run/dpdk/spdk_pid71966 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72046 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72127 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72231 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72334 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72430 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72510 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72584 00:37:07.973 Removing: 
/var/run/dpdk/spdk_pid72665 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72747 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72856 00:37:07.973 Removing: /var/run/dpdk/spdk_pid72948 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73054 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73134 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73211 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73290 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73366 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73475 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73566 00:37:07.973 Removing: /var/run/dpdk/spdk_pid73723 00:37:07.973 Removing: /var/run/dpdk/spdk_pid74014 00:37:07.973 Removing: /var/run/dpdk/spdk_pid74056 00:37:07.973 Removing: /var/run/dpdk/spdk_pid74543 00:37:07.973 Removing: /var/run/dpdk/spdk_pid74731 00:37:07.973 Removing: /var/run/dpdk/spdk_pid74831 00:37:07.973 Removing: /var/run/dpdk/spdk_pid74947 00:37:07.973 Removing: /var/run/dpdk/spdk_pid75008 00:37:07.973 Removing: /var/run/dpdk/spdk_pid75034 00:37:07.973 Removing: /var/run/dpdk/spdk_pid75323 00:37:07.973 Removing: /var/run/dpdk/spdk_pid75389 00:37:07.973 Removing: /var/run/dpdk/spdk_pid75476 00:37:07.973 Removing: /var/run/dpdk/spdk_pid75899 00:37:07.973 Removing: /var/run/dpdk/spdk_pid76040 00:37:07.973 Removing: /var/run/dpdk/spdk_pid76836 00:37:07.973 Removing: /var/run/dpdk/spdk_pid76985 00:37:07.973 Removing: /var/run/dpdk/spdk_pid77198 00:37:07.973 Removing: /var/run/dpdk/spdk_pid77305 00:37:07.973 Removing: /var/run/dpdk/spdk_pid77660 00:37:07.973 Removing: /var/run/dpdk/spdk_pid77939 00:37:07.973 Removing: /var/run/dpdk/spdk_pid78290 00:37:07.973 Removing: /var/run/dpdk/spdk_pid78498 00:37:07.973 Removing: /var/run/dpdk/spdk_pid78641 00:37:07.973 Removing: /var/run/dpdk/spdk_pid78705 00:37:07.973 Removing: /var/run/dpdk/spdk_pid78850 00:37:07.973 Removing: /var/run/dpdk/spdk_pid78885 00:37:07.973 Removing: /var/run/dpdk/spdk_pid78943 00:37:07.973 Removing: /var/run/dpdk/spdk_pid79155 00:37:07.973 Removing: /var/run/dpdk/spdk_pid79403 00:37:07.973 Removing: /var/run/dpdk/spdk_pid79827 00:37:07.973 Removing: /var/run/dpdk/spdk_pid80277 00:37:07.973 Removing: /var/run/dpdk/spdk_pid80743 00:37:07.973 Removing: /var/run/dpdk/spdk_pid81281 00:37:07.973 Removing: /var/run/dpdk/spdk_pid81429 00:37:07.973 Removing: /var/run/dpdk/spdk_pid81530 00:37:07.973 Removing: /var/run/dpdk/spdk_pid82292 00:37:07.973 Removing: /var/run/dpdk/spdk_pid82367 00:37:07.973 Removing: /var/run/dpdk/spdk_pid82833 00:37:07.973 Removing: /var/run/dpdk/spdk_pid83249 00:37:08.238 Removing: /var/run/dpdk/spdk_pid83760 00:37:08.238 Removing: /var/run/dpdk/spdk_pid83910 00:37:08.238 Removing: /var/run/dpdk/spdk_pid83963 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84030 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84093 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84164 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84411 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84489 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84561 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84636 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84675 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84744 00:37:08.238 Removing: /var/run/dpdk/spdk_pid84881 00:37:08.238 Clean 00:37:08.238 13:31:55 -- common/autotest_common.sh@1453 -- # return 0 00:37:08.238 13:31:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:08.238 13:31:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.238 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:37:08.238 13:31:55 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:37:08.238 13:31:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.238 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:37:08.238 13:31:55 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:08.238 13:31:55 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:08.238 13:31:55 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:08.238 13:31:55 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:08.238 13:31:55 -- spdk/autotest.sh@398 -- # hostname 00:37:08.238 13:31:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:08.499 geninfo: WARNING: invalid characters removed from testname! 00:37:35.038 13:32:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:39.252 13:32:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:41.788 13:32:28 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:44.336 13:32:31 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:47.665 13:32:34 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:50.196 13:32:37 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:53.477 13:32:39 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:53.477 13:32:39 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:53.477 13:32:39 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:53.477 13:32:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:53.477 13:32:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:53.477 13:32:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:53.477 + [[ -n 5293 ]] 00:37:53.477 + sudo kill 5293 00:37:53.486 [Pipeline] } 00:37:53.502 [Pipeline] // timeout 00:37:53.507 [Pipeline] } 00:37:53.522 [Pipeline] // stage 00:37:53.527 [Pipeline] } 00:37:53.541 [Pipeline] // catchError 00:37:53.550 [Pipeline] stage 00:37:53.552 [Pipeline] { (Stop VM) 00:37:53.567 [Pipeline] sh 00:37:53.855 + vagrant halt 00:37:58.045 ==> default: Halting domain... 00:38:03.320 [Pipeline] sh 00:38:03.598 + vagrant destroy -f 00:38:06.883 ==> default: Removing domain... 00:38:07.462 [Pipeline] sh 00:38:07.742 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:38:07.752 [Pipeline] } 00:38:07.771 [Pipeline] // stage 00:38:07.779 [Pipeline] } 00:38:07.794 [Pipeline] // dir 00:38:07.799 [Pipeline] } 00:38:07.814 [Pipeline] // wrap 00:38:07.820 [Pipeline] } 00:38:07.832 [Pipeline] // catchError 00:38:07.842 [Pipeline] stage 00:38:07.844 [Pipeline] { (Epilogue) 00:38:07.858 [Pipeline] sh 00:38:08.139 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:14.719 [Pipeline] catchError 00:38:14.721 [Pipeline] { 00:38:14.735 [Pipeline] sh 00:38:15.017 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:15.276 Artifacts sizes are good 00:38:15.285 [Pipeline] } 00:38:15.300 [Pipeline] // catchError 00:38:15.312 [Pipeline] archiveArtifacts 00:38:15.322 Archiving artifacts 00:38:15.486 [Pipeline] cleanWs 00:38:15.497 [WS-CLEANUP] Deleting project workspace... 00:38:15.497 [WS-CLEANUP] Deferred wipeout is used... 00:38:15.502 [WS-CLEANUP] done 00:38:15.504 [Pipeline] } 00:38:15.520 [Pipeline] // stage 00:38:15.525 [Pipeline] } 00:38:15.539 [Pipeline] // node 00:38:15.545 [Pipeline] End of Pipeline 00:38:15.581 Finished: SUCCESS